github_33873969 asked on 2016.01.28 15:22

Newbie asking for help: Python crawler

This is the error:
Downloading page 1 and saving it as 00001.html....

Traceback (most recent call last):
  File "D:\python 学习\百度贴吧的一个小爬虫.py", line 22, in <module>
    baidu_tieba(bdurl,begin_page,end_page)
  File "D:\python 学习\百度贴吧的一个小爬虫.py", line 9, in baidu_tieba
    m=urllib.urlopen(url+str(i)).read()
  File "C:\Python27\lib\urllib.py", line 87, in urlopen
    return opener.open(url)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 297, in open_http
    import httplib
  File "D:\python 学习\httplib.py", line 10, in <module>
    opener.open('http://rrurl.cn/b1UZuP')
  File "C:\Python27\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "C:\Python27\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "C:\Python27\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
AttributeError: 'module' object has no attribute 'HTTPConnection'

This is the program:
import string, urllib
import ssl

# Baidu Tieba downloader
def baidu_tieba(url, begin_page, end_page):
    for i in range(begin_page, end_page + 1):
        sName = string.zfill(i, 5) + '.html'  # zero-pad the filename to five digits
        print 'Downloading page ' + str(i) + ' and saving it as ' + sName + '....'
        f = open(sName, 'w+')
        m = urllib.urlopen(url + str(i)).read()
        f.write(m)
        f.close()

# Enter the parameters here~~~~~~~~~~~~
# This is the address of one thread in Shandong University's Baidu Tieba forum
#bdurl = 'http://tieba.baidu.com/p/2296017831?pn='
#iPostBegin = 1

#iPostEnd = 10

bdurl = str(raw_input(u'Enter the Tieba address, without the number after pn=:\n'))
begin_page = int(raw_input(u'Enter the starting page number:\n'))
end_page = int(raw_input(u'Enter the ending page number:\n'))
# End of parameters
# Run it
baidu_tieba(bdurl, begin_page, end_page)
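A side note on the `string.zfill(i, 5)` call above: it zero-pads the page number to five digits (so page 1 becomes 00001.html, matching the printed output), and it is Python 2 only; in Python 3 the `string` module no longer has `zfill`, and the equivalent is the `str.zfill` method. A minimal sketch of the same filename logic — the helper name `page_filename` is just for illustration, not part of the original program:

```python
def page_filename(i, width=5):
    """Zero-pad a page number into a filename, e.g. 1 -> '00001.html'.
    Equivalent to the Python 2 expression string.zfill(i, 5) + '.html'."""
    return str(i).zfill(width) + '.html'

print(page_filename(1))  # prints 00001.html
```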

2 Answers

oyljerry 2016.01.28 19:43

The `HTTPConnection` you are calling could not be found: the `httplib` module that got imported has no `HTTPConnection` attribute.
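Notice where the traceback says the import resolved: `import httplib` inside `C:\Python27\lib\urllib.py` loaded `D:\python 学习\httplib.py` — a file in the script's own folder — rather than the standard library's `C:\Python27\lib\httplib.py`. Because the script's directory comes first on `sys.path`, a local file named like a stdlib module shadows the real one, and that local stub has no `HTTPConnection`. A minimal sketch of how one might check a folder for such shadowing files — the function name `shadowed_stdlib_modules` is illustrative, not part of the original program:

```python
import os

def shadowed_stdlib_modules(script_dir, names):
    """Return the module names for which a .py file in script_dir
    would shadow the standard-library version, since the script's
    own directory is searched first on sys.path."""
    return [n for n in names
            if os.path.exists(os.path.join(script_dir, n + '.py'))]

# Hypothetical usage against the crawler's folder:
# shadowed_stdlib_modules(r'D:\python 学习', ['httplib', 'urllib', 'socket'])
```

If this reports `httplib`, renaming or deleting that local `httplib.py` (and any matching `.pyc`) should let the real module load again.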

coderCold 2016.01.28 20:12

I ran your program and it works fine.
