I'm trying to use Python with the Scrapy framework to scrape the song titles and artist names shown in image 2. Image 1 is my current code, and image 3 is the page source of the Kugou web page viewed through the browser's F12 devtools. After many attempts I keep getting an "Invalid expression" error. Could someone point out what's wrong?
The full terminal log is at the end.
```text
2024-07-10 16:12:56 [scrapy.utils.log] INFO: Scrapy 2.8.0 started (bot: zzh)
2024-07-10 16:12:56 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 22.10.0, Python 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.0.12 24 Oct 2023), cryptography 41.0.3, Platform Windows-10-10.0.22621-SP0
2024-07-10 16:12:56 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'zzh',
'COOKIES_ENABLED': False,
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'zzh.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_MODULES': ['zzh.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-10 16:12:56 [asyncio] DEBUG: Using selector: SelectSelector
2024-07-10 16:12:56 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-10 16:12:56 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-07-10 16:12:56 [scrapy.extensions.telnet] INFO: Telnet Password: 018a58d45a1fe1e6
2024-07-10 16:12:56 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-07-10 16:12:56 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-10 16:12:56 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-10 16:12:56 [scrapy.middleware] INFO: Enabled item pipelines:
['zzh.pipelines.ZzhPipeline']
2024-07-10 16:12:56 [scrapy.core.engine] INFO: Spider opened
2024-07-10 16:12:57 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-10 16:12:57 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-10 16:12:57 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.kugou.com/> from <GET http://www.kugou.com/>
2024-07-10 16:12:57 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.kugou.com/> (referer: None)
2024-07-10 16:12:57 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.kugou.com/> (referer: None)
Traceback (most recent call last):
File "D:\Anaconda\Lib\site-packages\parsel\selector.py", line 254, in xpath
result = xpathev(query, namespaces=nsp,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "src/lxml/etree.pyx", line 1600, in lxml.etree._Element.xpath
File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__
File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
lxml.etree.XPathEvalError: Invalid expression
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Anaconda\Lib\site-packages\scrapy\utils\defer.py", line 257, in iter_errback
yield next(it)
^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
return next(self.data)
^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\utils\python.py", line 312, in __next__
return next(self.data)
^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "D:\Anaconda\Lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 28, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "D:\Anaconda\Lib\site-packages\scrapy\spidermiddlewares\referer.py", line 353, in <genexpr>
return (self._set_referer(r, response) for r in result or ())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "D:\Anaconda\Lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 27, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "D:\Anaconda\Lib\site-packages\scrapy\spidermiddlewares\depth.py", line 31, in <genexpr>
return (r for r in result or () if self._filter(r, response, spider))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\core\spidermw.py", line 104, in process_sync
for r in iterable:
File "C:\Users\hp\Desktop\小学期‘\zzh\zzh\spiders\kugou.py", line 11, in parse
item['number'] = response.xpath("//div@[id='rankWrap']/ul/span@[class='pc_temp_num']").extract()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\scrapy\http\response\text.py", line 144, in xpath
return self.selector.xpath(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\Lib\site-packages\parsel\selector.py", line 260, in xpath
six.reraise(ValueError, ValueError(msg), sys.exc_info()[2])
'downloader/response_status_count/301': 1,
'elapsed_time_seconds': 0.304636,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 10, 8, 12, 57, 444910),
'httpcompression/response_bytes': 697498,
'httpcompression/response_count': 1,
'log_count/DEBUG': 5,
'log_count/ERROR': 1,
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'spider_exceptions/ValueError': 1,
'start_time': datetime.datetime(2024, 7, 10, 8, 12, 57, 140274)}
2024-07-10 16:12:57 [scrapy.core.engine] INFO: Spider closed (finished)
```
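For reference, the `lxml.etree.XPathEvalError: Invalid expression` comes from the `@[id='rankWrap']` / `@[class='pc_temp_num']` syntax in the traceback: in XPath an attribute test is written inside the predicate brackets with a leading `@`, i.e. `[@id='rankWrap']`, so the expression on line 11 of `kugou.py` is not valid XPath. Below is a minimal sketch of a `parse` method using valid syntax; the `li`/`span` nesting and the `@title` attribute are assumptions based on the class names in the question and should be checked against the actual structure seen in F12 (image 3):

```python
import scrapy


class KugouSpider(scrapy.Spider):
    name = "kugou"
    allowed_domains = ["kugou.com"]
    start_urls = ["https://www.kugou.com/"]

    def parse(self, response):
        # Attribute predicates are written [@id='rankWrap'],
        # not @[id='rankWrap'] as in the traceback above.
        # The nesting below (ul/li, span.pc_temp_num, @title) is an
        # assumption and may need adjusting to match the real page.
        for li in response.xpath("//div[@id='rankWrap']//ul/li"):
            yield {
                "number": li.xpath(".//span[@class='pc_temp_num']/text()")
                            .get(default="").strip(),
                "title": li.xpath("./@title").get(),
            }
```

A quick way to iterate on the expression is `scrapy shell "https://www.kugou.com/"`, then trying `response.xpath("//div[@id='rankWrap']//ul/li").getall()` interactively until it returns the expected nodes, before putting the selector back into the spider.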