城枫墨凉 2017-12-07 09:21 · Acceptance rate: 20%
1822 views
Accepted

Python crawler scraping the Baidu Baike "Python" entry page: is it the page URL that fails to be fetched, or something else?

Console output follows; here is the code.
1. URL manager:
class UrlManager(object):

    def __init__(self):
        self.new_urls = set()
        self.old_urls = set()

    def add_new_url(self, url):
        if url is None:
            return  # nothing to add
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def get_new_url(self):
        return len(self.new_urls) != 0

    def has_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url

2. HTML downloader:
import urllib.request

class HtmlDownloader(object):

    def download(self, url):
        if url is None:
            return None
        response = urllib.request.urlopen(url)
        if response.getcode() != 200:
            return None
        return response.read()
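One possible failure mode: Baidu Baike may reject requests carrying urllib's default `Python-urllib/3.x` User-Agent. A sketch of the downloader with a browser-like header and basic error handling (the UA string and timeout value are illustrative choices, not part of the original code):

```python
import urllib.error
import urllib.request

def download(url, timeout=10):
    """Fetch a page as text; return None on any failure."""
    if url is None:
        return None
    # Browser-like User-Agent (illustrative value) in case the site
    # rejects urllib's default agent string.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as response:
            if response.getcode() != 200:
                return None
            return response.read().decode("utf-8", errors="ignore")
    except urllib.error.URLError:
        return None

print(download(None))  # → None
```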

3. HTML parser:
# coding:utf-8
from bs4 import BeautifulSoup
import re
import urllib.parse

class HtmlParser(object):

    def parser(self, page_url, html_content):
        if page_url is None or html_content is None:
            return
        soup = BeautifulSoup(html_content, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data

    def _get_new_urls(self, page_url, soup):
        new_urls = set()
        # links = soup.find_all('a', href=re.compile(r"/item/\d+.htm"))
        links = soup.find_all('a', href=re.compile(r"/item/(.*)"))
        for link in links:
            new_url = link['href']
            new_full_url = urllib.parse.urljoin(page_url, new_url)
            new_urls.add(new_full_url)  # store the absolute URL, not the relative href
        return new_urls

    def _get_new_data(self, page_url, soup):
        res_data = {}
        # url
        res_data['url'] = page_url
        # Title node: <dd class="lemmaWgt-lemmaTitle-title"><h1>Python</h1>
        # (a computer programming language)
        title_node = soup.find('dd', class_='lemmaWgt-lemmaTitle-title').find('h1')
        res_data['title'] = title_node.get_text()
        # Summary node: <div class="lemma-summary">
        summary_node = soup.find('div', class_='lemma-summary')
        res_data['summary'] = summary_node.get_text()
        return res_data
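`urljoin` is what turns the site-relative `/item/...` hrefs into absolute URLs the downloader can open; a quick stdlib-only check (the example href is an illustrative percent-encoded entry name):

```python
import urllib.parse

page_url = "https://baike.baidu.com/item/Python"

# Hrefs on the entry page are site-relative, e.g. "/item/...";
# urljoin resolves them against the current page's URL.
relative = "/item/%E8%AE%A1%E7%AE%97%E6%9C%BA"
absolute = urllib.parse.urljoin(page_url, relative)
print(absolute)  # → https://baike.baidu.com/item/%E8%AE%A1%E7%AE%97%E6%9C%BA
```

If the set stores the relative href instead of `absolute`, every subsequent `urlopen` call gets a schemeless path and fails.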

4. HTML outputer:
class HtmlOutputer(object):

    def __init__(self):
        self.datas = []

    def collectData(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        # Text mode with an explicit encoding; writing .encode('utf-8')
        # bytes into a text-mode file produces b'...' artifacts in Python 3.
        fout = open('output.html', 'w', encoding='utf-8')
        fout.write("<html>")
        fout.write("<body>")
        fout.write("<table>")
        for data in self.datas:
            fout.write("<tr>")
            fout.write("<td>%s</td>" % data['url'])
            fout.write("<td>%s</td>" % data['title'])
            fout.write("<td>%s</td>" % data['summary'])
            fout.write("</tr>")

        fout.write("</table>")
        fout.write("</body>")
        fout.write("</html>")
        fout.close()
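A related pitfall: in Python 3, calling `.encode('utf-8')` on each value and writing the result to a text-mode file embeds literal `b'...'` strings in the HTML. Writing plain `str` values to a file opened with an explicit encoding avoids that. A self-contained sketch (the sample row and temp-file path are invented for illustration):

```python
import os
import tempfile

# One sample row shaped like the dicts the parser produces (values invented).
data = {"url": "https://baike.baidu.com/item/Python",
        "title": "Python",
        "summary": "an interpreted language"}

path = os.path.join(tempfile.gettempdir(), "output_demo.html")
with open(path, "w", encoding="utf-8") as fout:  # text mode + explicit encoding
    fout.write("<html><body><table>")
    fout.write("<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
               % (data["url"], data["title"], data["summary"]))
    fout.write("</table></body></html>")

with open(path, encoding="utf-8") as fin:
    html = fin.read()
print("b'" in html)  # → False: no byte-literal artifacts in the file
```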


6 answers

  • raygenyang 2017-12-07 15:26

    def get_new_url(self):
        return len(self.new_urls) != 0

    def has_new_url(self):
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url

    The bodies of these two functions are swapped, aren't they? `get_new_url` should pop and return a URL, and `has_new_url` should return whether any remain.
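    With the two bodies swapped back, a minimal corrected manager (method names taken from the question's code) could look like:

    ```python
    class UrlManager(object):
        """URL manager with has_new_url/get_new_url bodies in the right places."""

        def __init__(self):
            self.new_urls = set()  # URLs waiting to be crawled
            self.old_urls = set()  # URLs already crawled

        def add_new_url(self, url):
            if url is None:
                return
            if url not in self.new_urls and url not in self.old_urls:
                self.new_urls.add(url)

        def has_new_url(self):
            # Predicate: are there URLs left to crawl?
            return len(self.new_urls) != 0

        def get_new_url(self):
            # Pop one pending URL and remember it as crawled.
            new_url = self.new_urls.pop()
            self.old_urls.add(new_url)
            return new_url

    m = UrlManager()
    m.add_new_url("https://baike.baidu.com/item/Python")
    print(m.has_new_url())  # → True
    print(m.get_new_url())  # → https://baike.baidu.com/item/Python
    print(m.has_new_url())  # → False
    ```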
    
    This answer was selected by the asker as the best answer.
