听摇滚的卡车司机 2020-02-15 23:02 · Acceptance rate: 100%
490 views

What should I do when my XPath doesn't retrieve the data correctly?

Following the example in 《从零开始学网络爬虫》 (Learning Web Crawlers from Scratch), I am scraping the Douban Books Top 250:

https://book.douban.com/top250

Before scraping, I need XPath expressions for the book title, author, and other fields. I inspected the elements in the browser, right-clicked, and chose Copy XPath to get each element's XPath.


The original code from the book is as follows:

import csv
from lxml import etree
import requests


headers =  {
    'user-agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
}
urls = ['https://book.douban.com/top250?start={}'.format(str(i)) for i in range(0,250,25)]
wenben = open('E:\demo.csv','wt',newline='',encoding='utf-8')
writer = csv.writer(wenben)
writer.writerow(('name','url','author','publisher','date','price','rate','comment'))

for url in urls:
    html = requests.get(url,headers=headers)
    selector = etree.HTML(html.text)
    infos = selector.xpath('//tr[@class="item"]')

    for info in infos:
        name = info.xpath('td/div/a/@title')[0]
        url = info.xpath('td/div/a/@href')[0]
        book_infos = info.xpath('td/p/text()')[0]
        author = book_infos.split('/')[0]
        publisher = book_infos.split('/')[-3]
        date = book_infos.split('/')[-2]
        price = book_infos.split('/')[-1]
        rate = info.xpath('td/div/span[2]/text()')[0]
        comments = info.xpath('td/div/span[2]/text()')[0]
        comment = comments[0] if len(comments) != 0 else "空"
        writer.writerow((name,url,author,publisher,date,price,rate,comment))
        print(name)
wenben.close()
print("输出完成!")

As you can see, taking the book title as an example, the XPath used in the book is:

'td/div/a/@title'

But the XPath I get by inspecting the element in the browser is:

//*[@id="content"]/div/div[1]/div/table[1]/tbody/tr/td[2]/div[1]/a

Moreover, when I scrape with the XPath I copied myself, I get no page data at all; only the book's XPath retrieves the information correctly.
Could someone please explain why the XPath copied from the browser differs from the one in the book, and how I can work out the correct XPath on my own?
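
For reference, here is a minimal sketch for checking how many nodes each expression actually matches in the HTML that requests downloads (it reuses the same headers dict as the code above; nothing else is assumed):

import requests
from lxml import etree

headers = {
    'user-agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
}
html = requests.get('https://book.douban.com/top250', headers=headers)
selector = etree.HTML(html.text)

# The book's relative expression, evaluated from the document root
book_expr = '//tr[@class="item"]/td/div/a/@title'
# The absolute expression produced by the browser's "Copy XPath"
copied_expr = '//*[@id="content"]/div/div[1]/div/table[1]/tbody/tr/td[2]/div[1]/a'

print(len(selector.xpath(book_expr)))    # how many title attributes the book's expression finds
print(len(selector.xpath(copied_expr)))  # how many <a> elements the copied expression finds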


1 answer

  • 7*24 工作者 2020-02-16 16:23

    Here is the corrected code:

    #!/usr/bin/env python
    #-*- coding:utf-8 -*-
    import csv
    from lxml import etree
    import requests
    
    headers =  {
        'user-agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'
    }
    urls = ['https://book.douban.com/top250?start={}'.format(str(i)) for i in range(0,250,25)]
    wenben = open('E:\demo.csv','wt',newline='',encoding='utf-8')
    writer = csv.writer(wenben)
    writer.writerow(('name','url','author','publisher','date','price','rate','comment'))
    
    for url in urls:
        html = requests.get(url,headers=headers)
        selector = etree.HTML(html.text)
        infos = selector.xpath('//tr[@class="item"]')
    
        for info in infos:
            name = info.xpath('td')[1].xpath('./div/a/@title')[0]   # td[1] is the second td, which holds the text content
            url = info.xpath('td')[1].xpath('./div/a/@href')[0]
            book_infos = info.xpath('td')[1].xpath('./p/text()')[0]
            author = book_infos.split('/')[0]
            publisher = book_infos.split('/')[-3]
            date = book_infos.split('/')[-2]
            price = book_infos.split('/')[-1]
            rate = info.xpath('td')[1].xpath('./div[2]/span[2]/text()')[0]
            comments = info.xpath('td')[1].xpath('./div[2]/span[3]/text()')[0]
            comment = comments.replace("(","").replace(")","").replace("\n","").replace(" ","") if len(comments) != 0 else "空"
            writer.writerow((name,url,author,publisher,date,price,rate,comment))
    
    wenben.close()
    print("输出完成!")
    
    

    I took a look; the problem is with how you locate the elements. Under each item in infos there are two td tags, and the main content is all under the second td, but td/div/a/@title picks up the content under the first td, so the information isn't retrieved.
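
    A quick sketch for inspecting that layout yourself (it reuses selector and the etree import from the code above):

    row = selector.xpath('//tr[@class="item"]')[0]           # the first book row
    tds = row.xpath('td')
    print(len(tds))                                          # number of td cells in the row
    print(etree.tostring(tds[0], encoding='unicode')[:200])  # start of the first td's markup
    print(etree.tostring(tds[1], encoding='unicode')[:200])  # start of the second td's markup
    print(tds[1].xpath('./div/a/@title'))                    # the title, addressed relative to the second td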


