m0_37719865
自沉于海
2017-06-13 07:00

How do I export data scraped with scrapy into a CSV file?

  • python
  • csv
  • scrapy crawler

import csv
import pymysql
from scrapy.exceptions import DropItem


class UeinfoPipeline(object):
    def process_item(self, item, spider):
        list = []
        pinpai = item["pinpai"][0]
        xinghao = item["xinghao"][0]
        yuefen = item["yuefen"][0]
        nianfen = item["nianfen"][0]
        list.append(pinpai)
        list.append(xinghao)
        list.append(yuefen)
        list.append(nianfen)
        with open("test.csv", "w") as csvfile:
            fieldnames = ['first_name', 'last_name', 'username']
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(list)
        return item

    def close_spider(self, spider):
        self.conn.close()

The error traceback is as follows:
Traceback (most recent call last):
File "d:\programdata\anaconda3\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "D:\ueinfo\ueinfo\pipelines.py", line 36, in process_item
writer.writerows(list)
File "d:\programdata\anaconda3\lib\csv.py", line 158, in writerows
return self.writer.writerows(map(self._dict_to_list, rowdicts))
File "d:\programdata\anaconda3\lib\csv.py", line 148, in _dict_to_list
wrong_fields = rowdict.keys() - self.fieldnames
AttributeError: 'str' object has no attribute 'keys'
2017-06-13 14:56:40 [scrapy.core.engine] INFO: Closing spider (finished)
2017-06-13 14:56:40 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
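
The AttributeError is raised because csv.DictWriter.writerows() expects an iterable of dict rows, while the pipeline passes a list of plain strings, so each string is treated as a row and .keys() fails. Opening the file with "w" inside process_item also truncates it again for every item. Below is a minimal sketch of one way to fix the pipeline; it assumes the item fields pinpai, xinghao, yuefen and nianfen shown above, and the column names are only illustrative:

import csv


class UeinfoPipeline(object):
    def open_spider(self, spider):
        # Open the file once when the spider starts; newline="" avoids blank lines on Windows
        self.csvfile = open("test.csv", "w", newline="", encoding="utf-8")
        self.fieldnames = ["pinpai", "xinghao", "yuefen", "nianfen"]
        self.writer = csv.DictWriter(self.csvfile, fieldnames=self.fieldnames)
        self.writer.writeheader()

    def process_item(self, item, spider):
        # DictWriter.writerow expects a dict keyed by the fieldnames, not a list of strings
        self.writer.writerow({
            "pinpai": item["pinpai"][0],
            "xinghao": item["xinghao"][0],
            "yuefen": item["yuefen"][0],
            "nianfen": item["nianfen"][0],
        })
        return item

    def close_spider(self, spider):
        # Close the file once when the spider finishes
        self.csvfile.close()

For a simple dump, Scrapy's built-in feed export can also write CSV without a custom pipeline, e.g. scrapy crawl <spider_name> -o items.csv.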
