写代码的柯长
2016-01-28 07:22  Newbie asking for help: Python crawler
Here is the error:
Downloading page 1 and saving it as 00001.html....
Traceback (most recent call last):
  File "D:\python 学习\百度贴吧的一个小爬虫.py", line 22, in <module>
    baidu_tieba(bdurl,begin_page,end_page)
  File "D:\python 学习\百度贴吧的一个小爬虫.py", line 9, in baidu_tieba
    m=urllib.urlopen(url+str(i)).read()
  File "C:\Python27\lib\urllib.py", line 87, in urlopen
    return opener.open(url)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 297, in open_http
    import httplib
  File "D:\python 学习\httplib.py", line 10, in <module>
    opener.open('http://rrurl.cn/b1UZuP')
  File "C:\Python27\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "C:\Python27\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "C:\Python27\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
AttributeError: 'module' object has no attribute 'HTTPConnection'
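From the traceback, the `import httplib` inside urllib.py seems to resolve to D:\python 学习\httplib.py instead of the standard-library copy in C:\Python27\lib\, so the module that actually gets imported has no HTTPConnection attribute. A minimal diagnostic sketch (Python 2.7, matching the traceback) to check which file Python would load for httplib, without importing and running it:

# Diagnostic sketch: report where "httplib" would be loaded from.
import imp
print imp.find_module('httplib')[1]  # path of the file that provides (or shadows) httplib

If this prints a path under D:\python 学习\ rather than C:\Python27\lib\, the local httplib.py is shadowing the standard-library module.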
Here is the program:
import string,urllib
import ssl

#define the Baidu Tieba download function
def baidu_tieba(url,begin_page,end_page):
    for i in range(begin_page,end_page+1):
        sName=string.zfill(i,5)+'.html' #pad the page number into a five-digit file name
        print 'Downloading page '+str(i)+' and saving it as '+sName+'....'
        f=open(sName,'w+')
        m=urllib.urlopen(url+str(i)).read()
        f.write(m)
        f.close()

#enter the parameters here~~~~~~~~~~~~
#this is the address of one thread in the Shandong University Baidu Tieba
#bdurl = 'http://tieba.baidu.com/p/2296017831?pn='
#iPostBegin = 1
#iPostEnd = 10
bdurl=str(raw_input(u'Please enter the thread URL, without the number after pn=:\n'))
begin_page=int(raw_input(u'Please enter the starting page number:\n'))
end_page=int(raw_input(u'Please enter the ending page number:\n'))
#enter the parameters here
#call the function
baidu_tieba(bdurl,begin_page,end_page)
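If the local httplib.py really is the culprit, renaming it (and deleting any leftover httplib.pyc) should let urllib import the real standard-library module again. A minimal sketch, assuming it is run from the D:\python 学习 directory; the new file name is just a placeholder:

# Fix sketch: rename the shadowing httplib.py and remove its compiled copy
# so that "import httplib" resolves to C:\Python27\lib\httplib.py again.
import os

if os.path.exists('httplib.py'):
    os.rename('httplib.py', 'my_httplib_test.py')  # hypothetical new name; anything except httplib.py works
if os.path.exists('httplib.pyc'):
    os.remove('httplib.pyc')  # a stale .pyc would still shadow the standard library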