我就是大白PL 2021-11-21 21:19 · acceptance rate: 33.3%
411 views
Closed (answer accepted)

How do I save data scraped by a Python crawler into a CSV file?

The Python course moves too fast for me. It is almost over and I still only half understand most of it. For this crawler assignment the output has to be in CSV format, but mine is currently plain text, and I am not sure how to change it. Also, my IP has already been banned, and I have no idea where a proxy pool is supposed to go. The crawler's goal is to scrape book data from Douban Books.
pipelines code
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html


# useful for handling different item types with a single interface
from itemadapter import ItemAdapter

import json
class DoubanBooksPipeline:
    def process_item(self, item, spider):
        # Writes each item as one line of JSON -- this produces JSON text,
        # not real CSV, despite the .csv file name (see the CSV sketch below).
        with open("douban_book_list.csv", "a", encoding="utf-8") as f:
            f.write(json.dumps(item, ensure_ascii=False) + "\n")
        return item  # return the item so any later pipelines can see it

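For the CSV requirement itself, here is a minimal sketch of a pipeline that writes real CSV rows using Python's csv module. The column names are assumed from the keys the spider fills in, and open_spider/close_spider keep one file handle open for the whole crawl instead of reopening it per item:

import csv

class DoubanBooksCsvPipeline:
    # Assumed column order, taken from the keys the spider sets.
    FIELDS = ["big_title", "small_title", "cate_list_url", "book_name",
              "book_score", "book_price", "book_author", "book_comment_nums"]

    def open_spider(self, spider):
        # "w" starts a fresh file each crawl; newline="" lets the csv module
        # control line endings itself, as its docs require.
        self.file = open("douban_book_list.csv", "w", encoding="utf-8", newline="")
        self.writer = csv.DictWriter(self.file, fieldnames=self.FIELDS)
        self.writer.writeheader()

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.writer.writerow(dict(item))
        return item

If you use this instead of the JSON version, remember to point ITEM_PIPELINES at it, e.g. 'douban_books.pipelines.DoubanBooksCsvPipeline': 300.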

settings code
# Scrapy settings for douban_books project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douban_books'

SPIDER_MODULES = ['douban_books.spiders']
NEWSPIDER_MODULE = 'douban_books.spiders'
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36'
ITEM_PIPELINES = {
    'douban_books.pipelines.DoubanBooksPipeline': 300,
}
ROBOTSTXT_OBEY = False
LOG_LEVEL = 'WARNING'  # show only WARNING-level logs and above

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douban_books (+http://www.yourdomain.com)'

# Obey robots.txt rules


# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'douban_books.middlewares.DoubanBooksSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'douban_books.middlewares.DoubanBooksDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

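On the banned-IP question: a proxy pool belongs in a downloader middleware, enabled through the DOWNLOADER_MIDDLEWARES block above. Below is a minimal sketch, assuming you already have a list of working proxies; the addresses are placeholders, not real proxies:

# in douban_books/middlewares.py
import random

PROXY_POOL = [
    "http://111.111.111.111:8080",  # placeholder -- replace with real proxies
    "http://222.222.222.222:8080",  # placeholder
]

class RandomProxyMiddleware:
    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware routes the request through
        # whatever request.meta["proxy"] is set to.
        request.meta["proxy"] = random.choice(PROXY_POOL)

# in settings.py
DOWNLOADER_MIDDLEWARES = {
    'douban_books.middlewares.RandomProxyMiddleware': 543,
}

Also consider uncommenting DOWNLOAD_DELAY above so the crawl is slow enough not to get banned in the first place.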

spider code (book.py)
import scrapy
from copy import deepcopy
import re


class BookSpider(scrapy.Spider):
    name = 'book'
    allowed_domains = ['book.douban.com']
    start_urls = ['https://book.douban.com/tag/?view=type']

    def parse(self, response):
        item = {}
        div_list = response.xpath(".//div[@class='article']/div[2]/div")  # group by top-level category
        for div in div_list:
            item["big_title"] = div.xpath("./a/@name").extract_first()  # extract the top-level tag name
            tr_list = div.xpath(".//table[@class='tagCol']")  # group the tag tables
            for tr in tr_list:
                td_list = tr.xpath(".//td")
                for td in td_list:
                    item["small_title"] = td.xpath("./a/text()").extract_first()
                    item["cate_list_url"] = td.xpath("./a/@href").extract_first()
                    if item["cate_list_url"] is not None:
                        item["cate_list_url"] = 'https://book.douban.com' + item["cate_list_url"]
                        yield scrapy.Request(
                            item["cate_list_url"],
                            callback=self.parse_list,
                            meta={"item": deepcopy(item)}
                        )

    def parse_list(self, response):
        item = response.meta["item"]
        li_list = response.xpath(".//ul[@class='subject-list']/li")  # one <li> per book
        for li in li_list:
            item["book_name"] = li.xpath(".//div[@class='info']/h2/a/@title").extract_first()
            item["book_name"] = re.sub(r"[\n\t ]", "", item["book_name"] or "")  # strip whitespace and newlines from the title
            item["book_score"] = li.xpath(".//div[@class='star clearfix']/span[@class='rating_nums']/text()").extract_first()
            book_detail_str = li.xpath(".//div[@class='info']//div[@class='pub']/text()").extract_first()
            book_detail_str = re.sub(r"[\n ]", "", book_detail_str or "")  # clean the summary line before slicing it on "/"
            book_detail_list = book_detail_str.split("/")
            item["book_price"] = book_detail_list[-1] if len(book_detail_list) > 0 else None
            item["book_author"] = book_detail_list[0] if len(book_detail_list) > 0 else None
            item["book_comment_nums"] = li.xpath(".//div[@class='star clearfix']/span[@class='pl']/text()").extract_first()
            item["book_comment_nums"] = re.sub(r"[\n ]", "", item["book_comment_nums"] or "")
            yield deepcopy(item)  # yield (not just print) so the pipeline receives every book

        next_page = response.xpath(".//span[@class='next']/a/@href").extract_first()
        if next_page is not None:
            next_page = 'https://book.douban.com' + next_page
            yield scrapy.Request(
                next_page,
                callback=self.parse_list,
                meta={"item": deepcopy(item)}
            )

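Side note on the CSV requirement: once the spider actually yields its items (fixed above), Scrapy's built-in feed exports can write CSV with no custom pipeline at all; the exporter is chosen from the output file's extension:

scrapy crawl book -o douban_book_list.csv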

Any guidance would be appreciated, thanks!

1 answer

  • CSDN专家-黄老师 2021-11-21 23:03

    Just open the CSV file with open(), write each record as a string, and separate the fields with English commas.
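    A minimal sketch of what this answer describes; append_row is a hypothetical helper, and the csv module is safer than joining strings with "," by hand because it quotes fields that themselves contain commas:

    import csv

    def append_row(values, path="douban_book_list.csv"):
        # Write one row; csv handles quoting of embedded commas and quotes.
        with open(path, "a", encoding="utf-8", newline="") as f:
            csv.writer(f).writerow(values)

    append_row(["活着", "余华", "9.4", "20.00元"])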

    Accepted by the asker as the best answer.


Question events

  • Closed by the system on Dec 1
  • Answer accepted on Nov 23
  • Question created on Nov 21
