Installing cryptography fails with "'openssl/opensslv.h': No such file or directory"

Excerpt from the build output (each line is truncated on the left):

```
aphy.egg-info\PKG-INFO
inks to src\cryptography.egg-info\dependency_links.txt
to src\cryptography.egg-info\requires.txt
mes to src\cryptography.egg-info\top_level.txt
e 'src\cryptography.egg-info\SOURCES.txt'
plate 'MANIFEST.in'
ed directories found matching 'docs_build'
ly-included files matching '*' found under directory 'vectors'
e 'src\cryptography.egg-info\SOURCES.txt'
ode to build\bdist.win32\egg
le 'build\temp.win32-3.6\Release\_padding.c'
le 'build\temp.win32-3.6\Release\_constant_time.c'
le 'build\temp.win32-3.6\Release\_openssl.c'
extension
rosoft Visual Studio 14.0\VC\BIN\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -ID:\Python3.6.7\include -ID:\Python3.6.7\include "-
crosoft Visual Studio 14.0\VC\INCLUDE" "-IC:\Program Files\Windows Kits\10\include\10.0.10240.0\ucrt" "-IC:\Program Files\Window
hared" "-IC:\Program Files\Windows Kits\8.1\include\um" "-IC:\Program Files\Windows Kits\8.1\include\winrt" /Tcbuild\temp.win32-
.c /Fobuild\temp.win32-3.6\Release\build\temp.win32-3.6\Release_openssl.obj

\Release_openssl.c(498): fatal error C1083: Cannot open include file: 'openssl/opensslv.h': No such file or directory
Program Files\Microsoft Visual Studio 14.0\VC\BIN\cl.exe' failed with exit status 2
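```

The C1083 error means pip fell back to compiling cryptography from source and MSVC cannot find the OpenSSL headers. A common way around compiling on Windows is to let pip install a prebuilt binary wheel instead. The sketch below is only a suggestion; it assumes network access to PyPI and that a wheel exists for this interpreter (32-bit CPython 3.6).

```python
# Hedged workaround sketch: prefer a prebuilt cryptography wheel over a source build.
# Assumes PyPI is reachable and publishes a wheel matching this Python (win32, CPython 3.6).
import subprocess
import sys

def install_prebuilt_cryptography():
    # Upgrade pip/setuptools/wheel first so pip can select modern wheel tags.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "pip", "setuptools", "wheel"])
    # --only-binary refuses a source build: pip either installs a wheel or fails loudly.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--only-binary", "cryptography", "cryptography"])

if __name__ == "__main__":
    install_prebuilt_cryptography()
```

If a source build is unavoidable, the missing headers come from an OpenSSL development install, and its include/lib directories have to be visible to MSVC (for example via the INCLUDE and LIB environment variables) before running the build again.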

Other related questions
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
While installing OpenStack, the following error comes up:

```
In file included from /usr/include/openssl/cms.h:16:0,
2018-11-22 16:33:22.028 |                      from build/temp.linux-x86_64-2.7/_openssl.c:485:
2018-11-22 16:33:22.028 |     /usr/include/openssl/x509.h:552:6: note: expected 'const X509_ALGOR ** {aka const struct X509_algor_st **}' but argument is of type 'X509_ALGOR ** {aka struct X509_algor_st **}'
2018-11-22 16:33:22.028 |      void X509_get0_signature(const ASN1_BIT_STRING **psig,
2018-11-22 16:33:22.028 |           ^~~~~~~~~~~~~~~~~~~
2018-11-22 16:33:22.028 |     At top level:
2018-11-22 16:33:22.029 |     build/temp.linux-x86_64-2.7/_openssl.c:3492:13: warning: '_ssl_thread_locking_function' defined but not used [-Wunused-function]
2018-11-22 16:33:22.029 |      static void _ssl_thread_locking_function(int mode, int n, const char *file,
2018-11-22 16:33:22.029 |                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
2018-11-22 16:33:22.029 |     error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
2018-11-22 16:33:22.029 |
2018-11-22 16:33:22.029 |     ----------------------------------------
2018-11-22 16:33:22.070 | Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-dDyHZi/cryptography/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-VeABSz-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-dDyHZi/cryptography/
2018-11-22 16:33:22.097 | You are using pip version 9.0.3, however version 18.1 is available.
2018-11-22 16:33:22.097 | You should consider upgrading via the 'pip install --upgrade pip' command.
```

Could someone take a look at where the problem is? I have been stuck on this for several days, thanks!
Why can't my scrapy spider scrape any content from the Google Play store?
I want to use scrapy to crawl the Google Play store. The code raises no errors, but it scrapes no content. Why is that?

```
# -*- coding: utf-8 -*-
import scrapy
# from scrapy.spiders import CrawlSpider, Rule
# from scrapy.linkextractors import LinkExtractor
from gp.items import GpItem
# from html.parser import HTMLParser as SGMLParser
import requests


class GoogleSpider(scrapy.Spider):
    name = 'google'
    allowed_domains = ['https://play.google.com/']
    start_urls = ['https://play.google.com/store/apps/']

    '''
    rules = [
        Rule(LinkExtractor(allow=("https://play\.google\.com/store/apps/details",)), callback='parse_app', follow=True),
    ]
    '''

    def parse(self, response):
        selector = scrapy.Selector(response)
        urls = selector.xpath('//a[@class="LkLjZd ScJHi U8Ww7d xjAeve nMZKrb id-track-click"]/@href').extract()
        link_flag = 0
        links = []
        for link in urls:
            links.append(link)
        for each in urls:
            yield scrapy.Request(links[link_flag], callback=self.parse_next, dont_filter=True)
            link_flag += 1

    def parse_next(self, response):
        selector = scrapy.Selector(response)
        app_urls = selector.xpath('//div[@class="details"]/a[@class="title"]/@href').extract()
        print(app_urls)
        urls = []
        for url in app_urls:
            url = "http://play.google.com" + url
            print(url)
            urls.append(url)
        link_flag = 0
        for each in app_urls:
            yield scrapy.Request(urls[link_flag], callback=self.parse_app, dont_filter=True)
            link_flag += 1

    def parse_app(self, response):
        item = GpItem()
        item['app_url'] = response.url
        item['app_name'] = response.xpath('//div[@itemprop="name"]').xpath('text()').extract()
        item['app_icon'] = response.xpath('//img[@itempro="image"]/@src')
        item['app_developer'] = response.xpath('//')
        print(response.text)
        yield item
```

The terminal output when running it is:

```
BettyMacbookPro-764:gp zhanjinyang$ scrapy crawl google
2019-11-12 08:46:45 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: gp)
2019-11-12 08:46:45 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.7.1 (default, Dec 14 2018, 13:28:58) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.4.2, Platform Darwin-18.5.0-x86_64-i386-64bit
2019-11-12 08:46:45 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'gp', 'NEWSPIDER_MODULE': 'gp.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['gp.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'}
2019-11-12 08:46:45 [scrapy.extensions.telnet] INFO: Telnet Password: b2d7dedf1f4a91eb
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled item pipelines: ['gp.pipelines.GpPipeline']
2019-11-12 08:46:45 [scrapy.core.engine] INFO: Spider opened
2019-11-12 08:46:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-12 08:46:45 [py.warnings] WARNING: /anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/offsite.py:61: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://play.google.com/ in allowed_domains. warnings.warn(message, URLWarning)
2019-11-12 08:46:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-12 08:46:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://play.google.com/robots.txt> (referer: None)
2019-11-12 08:46:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://play.google.com/store/apps/> (referer: None)
2019-11-12 08:46:46 [scrapy.core.engine] INFO: Closing spider (finished)
2019-11-12 08:46:46 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 810, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 232419, 'downloader/response_count': 2, 'downloader/response_status_count/200': 2, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 11, 12, 8, 46, 46, 474543), 'log_count/DEBUG': 2, 'log_count/INFO': 9, 'log_count/WARNING': 1, 'memusage/max': 58175488, 'memusage/startup': 58175488, 'response_received_count': 2, 'robotstxt/request_count': 1, 'robotstxt/response_count': 1, 'robotstxt/response_status_count/200': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2019, 11, 12, 8, 46, 45, 562775)}
2019-11-12 08:46:46 [scrapy.core.engine] INFO: Spider closed (finished)
```

Please help!!!
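The log shows the page is downloaded (200) but the spider closes without scheduling any follow-up requests, which means the first XPath matched nothing. A quick hedged check, outside scrapy, of whether that selector matches the static HTML at all; it assumes the requests and parsel packages (parsel is the selector engine scrapy itself uses), and reuses the XPath and User-Agent from the question.

```python
# Hedged debugging sketch: does the XPath used in parse() match the *static* HTML at all?
import requests
from parsel import Selector

headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 "
                         "(KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36"}
html = requests.get("https://play.google.com/store/apps/", headers=headers).text

sel = Selector(text=html)
matches = sel.xpath('//a[@class="LkLjZd ScJHi U8Ww7d xjAeve nMZKrb id-track-click"]/@href').extract()
print(len(matches), "links matched")   # 0 means the markup scrapy sees never contains these nodes
print("id-track-click" in html)        # sanity check: does the class string exist in the raw HTML?
```

If nothing matches, the listing is being injected by JavaScript after page load (and the generated class names change over time), so the fix is more robust selectors against the server-rendered markup or a rendering step (e.g. a headless browser), not different scrapy settings. Also note the URLWarning in the log: `allowed_domains` should contain a domain such as `play.google.com`, not a full URL.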
SNIMissingWarning + InsecurePlatformWarning
When installing some packages with pip I run into SNIMissingWarning + InsecurePlatformWarning (Ubuntu 12.04.5 / Python 2.7.3 / pip 9.0.1). The details are below; any pointers would be appreciated, thanks!

```
aiboat@ubuntu:/usr/local/bin$ sudo -H pip install pyopenssl ndg-httpsclient pyasn1 --upgrade
/usr/local/lib/python2.7/dist-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
  SNIMissingWarning
/usr/local/lib/python2.7/dist-packages/pip/vendor/requests/packages/urllib3/util/ssl.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
Requirement already up-to-date: pyopenssl in /usr/local/lib/python2.7/dist-packages
Requirement already up-to-date: ndg-httpsclient in /usr/local/lib/python2.7/dist-packages
Requirement already up-to-date: pyasn1 in /usr/local/lib/python2.7/dist-packages
Requirement already up-to-date: cryptography>=1.9 in /usr/local/lib/python2.7/dist-packages (from pyopenssl)
Requirement already up-to-date: six>=1.5.2 in /usr/local/lib/python2.7/dist-packages (from pyopenssl)
Requirement already up-to-date: asn1crypto>=0.21.0 in /usr/local/lib/python2.7/dist-packages (from cryptography>=1.9->pyopenssl)
Requirement already up-to-date: cffi>=1.7 in /usr/local/lib/python2.7/dist-packages (from cryptography>=1.9->pyopenssl)
Requirement already up-to-date: ipaddress in /usr/local/lib/python2.7/dist-packages (from cryptography>=1.9->pyopenssl)
Requirement already up-to-date: enum34 in /usr/local/lib/python2.7/dist-packages (from cryptography>=1.9->pyopenssl)
Requirement already up-to-date: idna>=2.1 in /usr/local/lib/python2.7/dist-packages (from cryptography>=1.9->pyopenssl)
Requirement already up-to-date: pycparser in /usr/local/lib/python2.7/dist-packages (from cffi>=1.7->cryptography>=1.9->pyopenssl)
```
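Both warnings come from the old Python 2.7.3 ssl module, which lacks SNI and a real SSLContext. Upgrading to Python 2.7.9+ removes them; for code that uses urllib3/requests directly, the workaround documented by urllib3 is to route TLS through pyOpenSSL, as sketched below (hedged: this does not silence pip itself, because pip bundles its own vendored urllib3).

```python
# Hedged sketch: let urllib3 use pyOpenSSL for TLS so SNI and SSLContext become available
# on old Python 2.7 builds. Assumes pyOpenSSL, ndg-httpsclient and pyasn1 are installed,
# as the transcript above already shows.
import urllib3
import urllib3.contrib.pyopenssl

urllib3.contrib.pyopenssl.inject_into_urllib3()

http = urllib3.PoolManager()
resp = http.request("GET", "https://pypi.org")
print(resp.status)   # no SNIMissingWarning / InsecurePlatformWarning after the injection
```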
Why does this scrapy crawl fail?
```
C:\Users\Administrator\Desktop\新建文件夹\xiaozhu>python -m scrapy crawl xiaozhu
2019-10-26 11:43:11 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: xiaozhu)
2019-10-26 11:43:11 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 15:51:26) [MSC v.1900 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1
2019-10-26 11:43:11 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'xiaozhu', 'SPIDER_MODULES': ['xiaozhu.spiders'], 'NEWSPIDER_MODULE': 'xiaozhu.spiders'}
2019-10-26 11:43:11 [scrapy.extensions.telnet] INFO: Telnet Password: c61bda45d63b8138
2019-10-26 11:43:11 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.logstats.LogStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled item pipelines: []
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider opened
2019-10-26 11:43:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-26 11:43:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-26 11:43:12 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (307) to <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> from <GET http://bj.xiaozhu.com/fangzi/125535477903.html>
2019-10-26 11:43:12 [scrapy.core.engine] DEBUG: Crawled (400) <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> (referer: None)
2019-10-26 11:43:12 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <400 https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html>: HTTP status code is not handled or not allowed
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Closing spider (finished)
2019-10-26 11:43:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 529, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 725, 'downloader/response_count': 2, 'downloader/response_status_count/307': 1, 'downloader/response_status_count/400': 1, 'elapsed_time_seconds': 0.427734, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 889648), 'httperror/response_ignored_count': 1, 'httperror/response_ignored_status_count/400': 1, 'log_count/DEBUG': 2, 'log_count/INFO': 11, 'response_received_count': 1, 'scheduler/dequeued': 2, 'scheduler/dequeued/memory': 2, 'scheduler/enqueued': 2, 'scheduler/enqueued/memory': 2, 'start_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 461914)}
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider closed (finished)
```
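The log shows the listing page 307-redirects to bizverify.xiaozhu.com (a slider-verification page), the verification URL comes back as 400, and HttpErrorMiddleware drops it, so the parse callback never runs. A hedged sketch of how to at least see those responses instead of having them silently ignored; the spider name and structure here are illustrative, only the URL comes from the question.

```python
# Hedged sketch: surface the redirect/400 responses rather than letting HttpErrorMiddleware drop them.
import scrapy

class XiaozhuSpider(scrapy.Spider):
    name = "xiaozhu"
    start_urls = ["http://bj.xiaozhu.com/fangzi/125535477903.html"]

    custom_settings = {
        # A browser-like User-Agent; the default scrapy UA is an easy trigger for anti-bot checks.
        "USER_AGENT": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/78.0 Safari/537.36",
    }
    # Let non-2xx responses reach parse() so the verification page can be inspected.
    handle_httpstatus_list = [400, 403]

    def parse(self, response):
        self.logger.info("Got %s for %s", response.status, response.url)
        if "bizverify" in response.url:
            self.logger.warning("Redirected to a slider-verification page: the site is blocking automated requests.")
            return
        # Normal extraction would go here.
```

If the site keeps redirecting to the verification page even with realistic headers and cookies, that is anti-crawling protection rather than a scrapy bug, and it has to be handled at that level (cookies from a real session, slower crawling, or an official API if one exists).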
Python scrapy image crawler: newbie asking for help
Could someone tell me what is going on with this `data`? The error:

```
2020-02-07 09:24:55 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: meizitu)
2020-02-07 09:24:55 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.17763-SP0
2020-02-07 09:24:55 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'meizitu', 'NEWSPIDER_MODULE': 'meizitu.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['meizitu.spiders']}
2020-02-07 09:24:55 [scrapy.extensions.telnet] INFO: Telnet Password: 0936097982b9bcc8
2020-02-07 09:24:55 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats']
2020-02-07 09:24:56 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-07 09:24:56 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2020-02-07 09:24:56 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last): File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 184, in crawl return self._crawl(crawler, *args, **kwargs) File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 188, in _crawl d = crawler.crawl(*args, **kwargs) File "e:\python3.7\lib\site-packages\twisted\internet\defer.py", line 1613, in unwindGenerator return _cancellableInlineCallbacks(gen) File "e:\python3.7\lib\site-packages\twisted\internet\defer.py", line 1529, in _cancellableInlineCallbacks _inlineCallbacks(None, g, status) --- <exception caught here> --- File "e:\python3.7\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks result = g.send(result) File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 86, in crawl self.engine = self._create_engine() File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 111, in _create_engine return ExecutionEngine(self, lambda _: self.stop()) File "e:\python3.7\lib\site-packages\scrapy\core\engine.py", line 70, in __init__ self.scraper = Scraper(crawler) File "e:\python3.7\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__ self.itemproc = itemproc_cls.from_crawler(crawler) File "e:\python3.7\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler return cls.from_settings(crawler.settings, crawler) File "e:\python3.7\lib\site-packages\scrapy\middleware.py", line 34, in from_settings mwcls = load_object(clspath) File "e:\python3.7\lib\site-packages\scrapy\utils\misc.py", line 46, in load_object mod = import_module(module) File "e:\python3.7\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 724, in exec_module File "<frozen importlib._bootstrap_external>", line 860, in get_code File "<frozen importlib._bootstrap_external>", line 791, in source_to_code File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed builtins.SyntaxError: unexpected EOF while parsing (pipelines.py, line 22)
2020-02-07 09:24:56 [twisted] CRITICAL: Traceback (most recent call last): File "e:\python3.7\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks result = g.send(result) File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 86, in crawl self.engine = self._create_engine() File "e:\python3.7\lib\site-packages\scrapy\crawler.py", line 111, in _create_engine return ExecutionEngine(self, lambda _: self.stop()) File "e:\python3.7\lib\site-packages\scrapy\core\engine.py", line 70, in __init__ self.scraper = Scraper(crawler) File "e:\python3.7\lib\site-packages\scrapy\core\scraper.py", line 71, in __init__ self.itemproc = itemproc_cls.from_crawler(crawler) File "e:\python3.7\lib\site-packages\scrapy\middleware.py", line 53, in from_crawler return cls.from_settings(crawler.settings, crawler) File "e:\python3.7\lib\site-packages\scrapy\middleware.py", line 34, in from_settings mwcls = load_object(clspath) File "e:\python3.7\lib\site-packages\scrapy\utils\misc.py", line 46, in load_object mod = import_module(module) File "e:\python3.7\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 724, in exec_module File "<frozen importlib._bootstrap_external>", line 860, in get_code File "<frozen importlib._bootstrap_external>", line 791, in source_to_code File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "E:\python_work\爬虫\meizitu\meizitu\pipelines.py", line 22
    f.write(data)
                ^
SyntaxError: unexpected EOF while parsing
```

The code is below.

pipelines.py:

```
import requests

class MeizituPipeline(object):
    def process_item(self, item, spider):
        print("main_title:", item['main_title'])
        print("main_image:", item['main_image'])
        print("main_tags:", item['main_tags'])
        print("main_meta:", item['main_meta'])
        print("page:", item['main_pagenavi'])
        url = requests.get(item['main_image'])
        print(url)
        try:
            with open(item['main_pagenavi'] + '.jpg', 'wb') as f:
                data = url.read()
                f.write(data)
```

image.py:

```
import scrapy
from scrapy.http import response
from ..items import MeizituItem

class ImageSpider(scrapy.Spider):
    # Spider name (run with: scrapy crawl meiaitu)
    name = 'SpiderMain'
    # Domains the crawler is allowed to visit
    allowed_domains = ['www.mzitu.com/203554']
    # Start URL list
    start_urls = ['https://www.mzitu.com/203554']

    # Extract information from the response the downloader returns for the start_urls
    def parse(self, response):
        # Iterate over all matching nodes
        for Main in response.xpath('//div[@class = "main"]'):
            item = MeizituItem()
            # Match the nodes under /html/body/div[2]/div[1]/div[3]/p/a
            content = Main.xpath('//div[@class = "content"]')
            item['main_title'] = content.xpath('./h2/text()')
            item['main_image'] = content.xpath('./div[@class="main-image"]/p/a/img')
            item['main_meta'] = content.xpath('./div[@class="main-meta"]/span/text()').extract()
            item['main_tags'] = content.xpath('./div[@class="main-tags"]/a/text()').extract()
            item['main_pagenavi'] = content.xpath('./div[@class="main_pagenavi"]/span/text()').extract_first()
            yield item

        new_links = response.xpath('.//div[@class="pagenavi"]/a/@href').extract()
        new_link = new_links[-1]
        yield scrapy.Request(new_link, callback=self.parse)
```

settings.py:

```
BOT_NAME = 'meizitu'

SPIDER_MODULES = ['meizitu.spiders']
NEWSPIDER_MODULE = 'meizitu.spiders'

ROBOTSTXT_OBEY = True

# Default request headers
DEFAULT_REQUEST_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.108 Safari/537.36",
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
}

ITEM_PIPELINES = {
    'meizitu.pipelines.MeizituPipeline': 300,
}

IMAGES_STORE = 'E:\python_work\爬虫\meizitu'
IMAGES_MIN_HEIGHT = 1050
IMAGES_MIN_WIDTH = 700
```
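The traceback is a SyntaxError at pipelines.py line 22: the `try:` block is never closed with an `except`/`finally`, so the whole module fails to import before any crawling starts. There is a second problem waiting behind it: `requests.get(...)` returns a Response object, which has no `.read()`; the bytes live in `.content`. A hedged, corrected version of the pipeline method (same structure as the original, with only those fixes):

```python
# Hedged fix sketch for the pipeline: close the try block and read the bytes via .content.
import requests

class MeizituPipeline(object):
    def process_item(self, item, spider):
        print("main_title:", item['main_title'])
        resp = requests.get(item['main_image'])      # Response object, not a file-like object
        try:
            with open(item['main_pagenavi'] + '.jpg', 'wb') as f:
                f.write(resp.content)                # .content holds the downloaded bytes
        except (OSError, requests.RequestException) as exc:
            spider.logger.error("Failed to save image: %s", exc)
        return item
```

Note that in the original spider `item['main_image']` is a SelectorList rather than a URL string, so it would still need `.extract_first()` (and probably the `img/@src` attribute) before being passed to `requests.get`.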
Help needed with a small Sage programming question
I have recently been implementing LWE in Sage. Sage can generate LWE samples directly, but how can I get the LWE secret key **s**? I have looked through the Sage documentation and it does not say how to access the secret-key variable. Any pointers would be appreciated. For reference, the Sage documentation on LWE: http://doc.sagemath.org/html/en/reference/cryptography/sage/crypto/lwe.html
Uploading a local file with OSS keeps reporting a path error
I am a beginner; could someone advise? I connect to OSS in Visual Studio and upload a local file, but it keeps reporting a path error. ![screenshot](https://img-ask.csdn.net/upload/202002/11/1581405964_688665.jpg)

```
using System;
using System.IO;
using System.Threading;
using Aliyun.OSS.Common;
using System.Text;
using Aliyun.OSS.Util;
using System.Security.Cryptography;

namespace Aliyun.OSS.Samples
{
    /// <summary>
    /// Sample for putting object.
    /// </summary>
    public static class PutObjectSample
    {
        static string accessKeyId = "*** ";
        static string accessKeySecret = "*** ";
        static string endpoint = "*** ";
        static OssClient client = new OssClient(endpoint, accessKeyId, accessKeySecret);

        static string fileToUpload = "‪D:\\abc\\1.txt";

        static AutoResetEvent _event = new AutoResetEvent(false);
        static HashAlgorithm hashAlgorithm = new MD5CryptoServiceProvider();

        /// <summary>
        /// sample for put object to oss
        /// </summary>
        public static void PutObject(string bucketName)
        {
            PutObjectFromFile(bucketName);
        }

        public static void PutObjectFromFile(string bucketName)
        {
            const string key = "PutObjectFromFile";
            try
            {
                client.PutObject(bucketName, key, fileToUpload);
                Console.WriteLine("Put object:{0} succeeded", key);
            }
            catch (OssException ex)
            {
                Console.WriteLine("Failed with error code: {0}; Error info: {1}. \nRequestID:{2}\tHostID:{3}", ex.ErrorCode, ex.Message, ex.RequestId, ex.HostId);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Failed with error info: {0}", ex.Message);
            }
        }
    }
}
```
C# WinForms: automatically installing a p12 certificate reports "Access denied"
The code:

```
// Add a personal certificate
X509Certificate2 certificate = new X509Certificate2("C:\\test.p12", "certificate password");
X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadWrite);
store.Remove(certificate);   // can be omitted
store.Add(certificate);
store.Close();
store.Open(OpenFlags.ReadWrite);   // this statement throws the error
```

The error message:

```
System.Security.Cryptography.CryptographicException: Access is denied.
   at System.Security.Cryptography.X509Certificates.X509Store.Open(OpenFlags flags)
   at PcHealthDoctor.MainForm.MainForm_Load(Object sender, EventArgs e) in D:\PcHealthDoctor\PcHealthDoctor\MainForm.cs: line 208
```
TensorFlow reports "No OpKernel was registered to support Op 'NcclAllReduce' with these attrs." when training StyleGAN
I am testing the official StyleGAN. Running the official pretrained-model scripts pretrained_example.py and generate_figures.py works fine and the GPUs behave normally, but running train.py raises the error below. Training with only a single GPU does not raise it. NcclAllReduce seems to be related to multi-GPU communication, which I do not know much about.

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]]

After plenty of googling I have tried rebooting, conda install keras-gpu, and reinstalling tensorflow-gpu==1.10.0 (to stay consistent with the official version).

```
…… Building TensorFlow graph... Setting up snapshot image grid... Setting up run dir... Training... Traceback (most recent call last): File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call return fn(*args) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 230, in training_loop tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch}) File "E:\MachineLearning\stylegan-master\dnnlib\tflib\tfutil.py", line 26, in run return tf.get_default_session().run(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 877, in run run_metadata_ptr) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run feed_dict_tensor, options, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] Caused by op 'TrainD/SumAcrossGPUs/NcclAllReduce', defined at: File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 185, in training_loop D_train_op = D_opt.apply_updates() File "E:\MachineLearning\stylegan-master\dnnlib\tflib\optimizer.py", line 135, in apply_updates g = nccl_ops.all_sum(g) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 49, in all_sum return _apply_all_reduce('sum', tensors) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 230, in _apply_all_reduce shared_name=shared_name)) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\ops\gen_nccl_ops.py", line 59, in nccl_all_reduce num_devices=num_devices, shared_name=shared_name, name=name) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func return func(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3156, in create_op op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] ``` ``` #conda list: # Name Version Build Channel _tflow_select 2.1.0 gpu absl-py 0.8.1 pypi_0 pypi alabaster 0.7.12 py36_0 asn1crypto 1.2.0 py36_0 astor 0.8.0 pypi_0 pypi astroid 2.3.2 py36_0 attrs 19.3.0 py_0 babel 2.7.0 py_0 backcall 0.1.0 py36_0 blas 1.0 mkl bleach 3.1.0 py36_0 ca-certificates 2019.10.16 0 certifi 2019.9.11 py36_0 cffi 1.13.1 py36h7a1dbc1_0 chardet 3.0.4 py36_1003 cloudpickle 1.2.2 py_0 colorama 0.4.1 py36_0 cryptography 2.8 py36h7a1dbc1_0 cudatoolkit 9.0 1 cudnn 7.6.4 cuda9.0_0 decorator 4.4.1 py_0 defusedxml 0.6.0 py_0 django 2.2.7 pypi_0 pypi docutils 0.15.2 py36_0 entrypoints 0.3 py36_0 gast 0.3.2 py_0 grpcio 1.25.0 pypi_0 pypi h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 idna 2.8 pypi_0 pypi image 1.5.27 pypi_0 pypi imagesize 1.1.0 py36_0 importlib_metadata 0.23 py36_0 intel-openmp 2019.4 245 ipykernel 5.1.3 py36h39e3cac_0 ipython 7.9.0 py36h39e3cac_0 ipython_genutils 0.2.0 py36h3c5d0ee_0 isort 4.3.21 py36_0 jedi 0.15.1 py36_0 jinja2 2.10.3 py_0 jpeg 9b hb83a4c4_2 jsonschema 3.1.1 py36_0 jupyter_client 5.3.4 py36_0 jupyter_core 4.6.1 py36_0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-gpu 2.2.4 0 keras-preprocessing 1.1.0 py_1 keyring 18.0.0 py36_0 lazy-object-proxy 1.4.3 py36he774522_0 libpng 1.6.37 h2a8f88b_0 libprotobuf 3.9.2 h7bd577a_0 libsodium 1.0.16 h9d3ae62_0 markdown 3.1.1 py36_0 markupsafe 1.1.1 py36he774522_0 mccabe 0.6.1 py36_1 mistune 0.8.4 py36he774522_0 mkl 2019.4 245 mkl-service 2.3.0 py36hb782905_0 mkl_fft 1.0.15 py36h14836fe_0 mkl_random 1.1.0 py36h675688f_0 more-itertools 7.2.0 py36_0 nbconvert 5.6.1 py36_0 nbformat 4.4.0 py36h3a5bc1b_0 numpy 1.17.3 py36h4ceb530_0 numpy-base 1.17.3 py36hc3f5095_0 numpydoc 0.9.1 py_0 openssl 1.1.1d he774522_3 packaging 19.2 py_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py36_1 parso 0.5.1 py_0 pickleshare 0.7.5 py36_0 pillow 6.2.1 pypi_0 pypi pip 19.3.1 py36_0 prompt_toolkit 2.0.10 py_0 protobuf 3.10.0 pypi_0 pypi psutil 5.6.3 py36he774522_0 pycodestyle 2.5.0 py36_0 pycparser 2.19 py36_0 pyflakes 2.1.1 py36_0 pygments 2.4.2 py_0 pylint 2.4.3 py36_0 pyopenssl 19.0.0 py36_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 pyrsistent 0.15.4 py36he774522_0 pysocks 1.7.1 py36_0 python 3.6.9 h5500b2f_0 python-dateutil 2.8.1 py_0 pytz 2019.3 py_0 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.2 py36he774522_0 pyzmq 18.1.0 py36ha925a31_0 qt 5.9.7 vc14h73c81de_0 qtawesome 0.6.0 py_0 qtconsole 4.5.5 py_0 qtpy 1.9.0 py_0 requests 2.22.0 py36_0 rope 0.14.0 py_0 scipy 1.3.1 py36h29ff71c_0 setuptools 39.1.0 pypi_0 pypi sip 4.19.8 py36h6538335_0 six 1.13.0 pypi_0 pypi snowballstemmer 2.0.0 py_0 sphinx 2.2.1 py_0 sphinxcontrib-applehelp 1.0.1 py_0 sphinxcontrib-devhelp 1.0.1 py_0 sphinxcontrib-htmlhelp 1.0.2 py_0 sphinxcontrib-jsmath 1.0.1 py_0 sphinxcontrib-qthelp 1.0.2 py_0 sphinxcontrib-serializinghtml 1.1.3 py_0 spyder 3.3.6 py36_0 spyder-kernels 0.5.2 py36_0 sqlite 3.30.1 he774522_0 sqlparse 0.3.0 pypi_0 pypi tensorboard 1.10.0 py36he025d50_0 tensorflow 1.10.0 gpu_py36h3514669_0 tensorflow-base 1.10.0 gpu_py36h6e53903_0 tensorflow-gpu 1.10.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi testpath 0.4.2 py36_0 tornado 6.0.3 py36he774522_0 traitlets 4.3.3 py36_0 typed-ast 1.4.0 py36he774522_0 urllib3 
1.25.6 pypi_0 pypi vc 14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 wcwidth 0.1.7 py36h3d5aa90_0 webencodings 0.5.1 py36_1 werkzeug 0.16.0 py_0 wheel 0.33.6 py36_0 win_inet_pton 1.1.0 py36_0 wincertstore 0.2 py36h7fe50ca_0 wrapt 1.11.2 py36he774522_0 yaml 0.1.7 hc54c509_2 zeromq 4.3.1 h33f27b4_3 zipp 0.6.0 py_0 zlib 1.2.11 h62dcd97_3 ``` 2*RTX2080Ti driver 4.19.67
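The traceback says the graph contains an NcclAllReduce op but this TensorFlow build has no kernel registered for it; native Windows builds of TensorFlow 1.x do not ship NCCL, which is consistent with single-GPU training working and multi-GPU training failing. A small hedged check, assuming TF 1.10 in graph mode with tensorflow.contrib.nccl and at least two visible GPUs, that reproduces the question outside StyleGAN:

```python
# Hedged sketch: can this TensorFlow build actually run an NCCL all-reduce?
# Assumes TensorFlow 1.10 (graph mode), tensorflow.contrib.nccl, and two visible GPUs.
import tensorflow as tf
from tensorflow.contrib import nccl

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0])
with tf.device('/gpu:1'):
    b = tf.constant([3.0, 4.0])

summed = nccl.all_sum([a, b])  # the same op StyleGAN's optimizer builds for SumAcrossGPUs

with tf.Session() as sess:
    try:
        print(sess.run(summed))
    except tf.errors.InvalidArgumentError as err:
        # On builds without an NCCL kernel (e.g. native Windows TF 1.x) this raises the same
        # "No OpKernel was registered to support Op 'NcclAllReduce'" error as train.py.
        print("NCCL all-reduce unavailable:", err.message)
```

If this check fails on Windows, the usual options are training on a single GPU (reducing the GPU count in the StyleGAN training config) or running the multi-GPU training on Linux, where NCCL is supported.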
A newly created Xamarin project will not run
I created a brand-new project and it will not run at all; it errors out. It used to work before. ![screenshot](https://img-ask.csdn.net/upload/201604/09/1460200755_739999.png)

```
Severity  Code  Description  Project  File  Line  Suppression State
Error  The "GetAdditionalResourcesFromAssemblies" task was not declared or used correctly, or failed during construction. Check that the task name and assembly name are spelled correctly.  App1

Severity  Code  Description  Project  File  Line  Suppression State
Error  The "GetAdditionalResourcesFromAssemblies" task could not be instantiated from "C:\Program Files (x86)\MSBuild\Xamarin\Android\Xamarin.Android.Build.Tasks.dll". System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.
   at System.Security.Cryptography.MD5CryptoServiceProvider..ctor()
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor)
   at System.Reflection.RuntimeConstructorInfo.Invoke(BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at System.Security.Cryptography.CryptoConfig.CreateFromName(String name, Object[] args)
   at System.Security.Cryptography.MD5.Create()
   at Xamarin.Android.Tasks.GetAdditionalResourcesFromAssemblies..ctor()  App1
```
C#: a UserControl's Load event does not fire
```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Drawing;
using System.Data;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Net;
using System.Net.Sockets;
using System.Security.Cryptography;
using System.IO;
using OpenPOP.POP3;
using System.Collections;
using OpenPOP.MIMEParser;
using System.Threading;
using mymail.sqlDao;
using System.Data.SqlClient;
using mymail;

namespace mymail
{
    public partial class inbox : UserControl
    {
        public inbox()
        {
            InitializeComponent();
        }

        private void inbox_Load(object sender, EventArgs e)
        {
            msgs.Clear();
            //listMessages.Nodes.Clear();   // message list
            listAttachments.Nodes.Clear();  // attachment list
            listView1.Clear();              // clear all items in the control
            //button2.Enabled = false;
            //button3.Enabled = false;
            //ReceiveMails();
        }
```
A question about PaddingMode in C# AES encryption
```
public static string AesEncrypt(string str, string key, string iv)
{
    if (string.IsNullOrEmpty(str)) return null;
    Byte[] toEncryptArray = Encoding.UTF8.GetBytes(str);
    byte[] keyArr = Encoding.UTF8.GetBytes(key);
    byte[] ivArr = Encoding.UTF8.GetBytes(iv);
    System.Security.Cryptography.RijndaelManaged rm = new System.Security.Cryptography.RijndaelManaged();
    /*************************** the problem is here *****************************/
    rm.Padding = System.Security.Cryptography.PaddingMode.None;
    /******************************************************************/
    rm.Mode = System.Security.Cryptography.CipherMode.CFB;
    rm.BlockSize = 128;
    System.Security.Cryptography.ICryptoTransform cTransform = rm.CreateEncryptor(keyArr, ivArr);
    Byte[] resultArray = cTransform.TransformFinalBlock(toEncryptArray, 0, toEncryptArray.Length);
    return Convert.ToBase64String(resultArray, 0, resultArray.Length);
}
```

The call:

```
String str = "appid=pm93c050eb69884a50&appsecret=b24ee0ef-73be-4a9a-a884-e0c0e27c&timestamp=1489732369";
string aa = AesEncrypt(str, "dha@wjw$pvms9wwl", "8807599889957088");
```

When Padding is set to PaddingMode.None (no padding), it always throws an exception. Does anyone know how to solve this?

![screenshot](https://img-ask.csdn.net/upload/201703/21/1490027098_177459.png)
Bank of China API: POST submission from the backend
This is the Bank of China refund API, submitted as a POST from the backend without going through a page. The request fails with "The request was aborted: Could not create SSL/TLS secure channel." The code is below:

```
string Number = CreateRandomCode(30);
posturl = "https://ebspay.boc.cn/PGWPortal/RefundOrder.do";
pastData = ChinaBlankConfig.merchantNo + "|" + Number + "|" + "001" + "|" + 0.01 + "|" + "20121206SHKF000004";
byte[] _bytedata = System.Text.Encoding.UTF8.GetBytes(pastData);
string _strSignData = PKCS7Tool.SignatureMessage(cerPath, cerWord, _bytedata, string.Empty);
string _singDate = "merchantNo=" + ChinaBlankConfig.merchantNo + "&mRefundSeq=" + Number + "&curCode=001" + "&refundAmount=" + 0.01 + "&orderNo=20121206SHKF000004" + "&signData=" + _strSignData;
string str = RequestAndResponse(posturl, _singDate);

// Initialize the request object
var request = WebRequest.Create(url) as HttpWebRequest;
// X509 certificate
//X509Certificate2 cert = new System.Security.Cryptography.X509Certificates.X509Certificate2(cerPath, "111111", X509KeyStorageFlags.MachineKeySet);
//request.ClientCertificates.Add(cert);
X509Certificate cert = new System.Security.Cryptography.X509Certificates.X509Certificate(cerPath, "111111");
X509Certificate cert2 = new System.Security.Cryptography.X509Certificates.X509Certificate(keyStorePath);
request.ClientCertificates.Add(cert2);
request.ClientCertificates.Add(cert);
request.ImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Impersonation;
// Set the certificate validation callback (always accept)
ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(CheckValidationResult);
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = Encoding.UTF8.GetByteCount(postData);
request.Timeout = 300000;
request.ReadWriteTimeout = 300000;
// Write the data to the request stream and send it
using (var requestStream = request.GetRequestStream())
{
    using (var streamWriter = new StreamWriter(requestStream))
    {
        streamWriter.Write(postData);
    }
}
// Get the response object
var response = request.GetResponse() as HttpWebResponse;
// Read the data returned in the response stream
using (var responseStream = response.GetResponseStream())
{
    using (var streamReader = new StreamReader(responseStream))
    {
        return streamReader.ReadToEnd();
    }
}
```
Why do I keep getting "ImportError: cannot import name 'run_algorithm'" when using catalyst?
As the title says. My environment: Python 3.6, with the following packages:

```
aiodns==1.1.1 aiohttp==3.5.4 alabaster==0.7.12 alembic==0.9.7 appnope==0.1.0 asn1crypto==0.24.0 astroid==2.2.5 async-timeout==3.0.1 attrdict==2.0.1 attrs==19.1.0 Babel==2.6.0 backcall==0.1.0 bcolz==1.2.1 bleach==3.1.0 boto3==1.5.27 botocore==1.8.50 Bottleneck==1.2.1 cchardet==2.1.1 ccxt==1.17.94 certifi==2019.3.9 cffi==1.12.3 chardet==3.0.4 click==6.7 cloudpickle==1.0.0 contextlib2==0.5.5 cryptography==2.6.1 cycler==0.10.0 cyordereddict==1.0.0 Cython==0.27.3 cytoolz==0.9.0.1 decorator==4.4.0 defusedxml==0.6.0 docutils==0.14 empyrical==0.2.2 enigma-catalyst==0.5.21 entrypoints==0.3 eth-abi==1.3.0 eth-account==0.2.3 eth-hash==0.2.0 eth-keyfile==0.5.1 eth-keys==0.2.2 eth-rlp==0.1.2 eth-typing==2.1.0 eth-utils==1.6.0 hexbytes==0.1.0 idna==2.8 idna-ssl==1.1.0 imagesize==1.1.0 inflection==0.3.1 intervaltree==2.1.0 ipykernel==5.1.0 ipython==7.5.0 ipython-genutils==0.2.0 isort==4.3.19 jedi==0.13.3 Jinja2==2.10.1 jmespath==0.9.4 jsonschema==3.0.1 jupyter-client==5.2.4 jupyter-core==4.4.0 keyring==18.0.0 kiwisolver==1.1.0 lazy-object-proxy==1.4.1 Logbook==0.12.5 lru-dict==1.1.6 lxml==4.3.3 Mako==1.0.7 MarkupSafe==1.1.1 matplotlib==3.1.0 mccabe==0.6.1 mistune==0.8.4 mkl-fft==1.0.12 mkl-random==1.0.2 more-itertools==7.0.0 multidict==4.5.2 multipledispatch==0.4.9 nbconvert==5.5.0 nbformat==4.4.0 networkx==2.1 numexpr==2.6.4 numpy==1.16.0 numpydoc==0.9.1 packaging==19.0 pandas==0.24.2 pandas-datareader==0.6.0 pandocfilters==1.4.2 parsimonious==0.8.1 parso==0.4.0 patsy==0.5.1 pexpect==4.7.0 pickleshare==0.7.5 prompt-toolkit==2.0.9 psutil==5.6.2 ptyprocess==0.6.0 pycares==3.0.0 pycodestyle==2.5.0 pycparser==2.19 pycryptodome==3.8.2 pyflakes==2.1.1 Pygments==2.4.0 pylint==2.3.1 pyOpenSSL==19.0.0 pyparsing==2.4.0 pyrsistent==0.14.11 PySocks==1.7.0 python-dateutil==2.8.0 python-editor==1.0.4 pytz==2019.1 pyzmq==18.0.0 QtAwesome==0.5.7 qtconsole==4.5.1 QtPy==1.7.1 Quandl==3.4.5 redo==2.0.1 requests==2.21.0 requests-file==1.4.3 requests-ftp==0.3.1 requests-toolbelt==0.8.0 rlp==1.1.0 rope==0.14.0 s3transfer==0.1.13 scipy==1.2.1 six==1.12.0 snowballstemmer==1.2.1 sortedcontainers==1.5.9 Sphinx==2.0.1 sphinxcontrib-applehelp==1.0.1 sphinxcontrib-devhelp==1.0.1 sphinxcontrib-htmlhelp==1.0.2 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.2 sphinxcontrib-serializinghtml==1.1.3 spyder==3.3.4 spyder-kernels==0.4.4 SQLAlchemy==1.2.2 statsmodels==0.9.0 tables==3.4.2 testpath==0.4.2 toolz==0.9.0 tornado==6.0.2 traitlets==4.3.2 typed-ast==1.3.4 typing-extensions==3.7.2 urllib3==1.24.3 wcwidth==0.1.7 web3==4.4.1 webencodings==0.5.1 websockets==5.0.1 wrapt==1.11.1 wurlitzer==1.0.2 yarl==1.1.0
```

When running catalyst it reports:

```
runfile('/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py', wdir='/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency')
Traceback (most recent call last):
  File "<ipython-input-10-5dde7acc5e52>", line 1, in <module> runfile('/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py', wdir='/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency')
  File "/Users/mac/miniconda3/envs/catalyst/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile execfile(filename, namespace)
  File "/Users/mac/miniconda3/envs/catalyst/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace)
  File "/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py", line 6, in <module> from catalyst import run_algorithm
  File "/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/catalyst.py", line 1, in <module> from catalyst import run_algorithm
ImportError: cannot import name 'run_algorithm'
```

I searched online for a long time but none of the solutions I found fixed it. Could it be caused by something that already went wrong when installing catalyst? Below is the error that occurred during installation. Please help!

```
ERROR: Cannot uninstall 'certifi'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Note: you may need to restart the kernel to use updated packages.
```
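The last two frames of the traceback give the cause away: the import resolves to /Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/catalyst.py, a local file named catalyst.py that shadows the installed enigma-catalyst package (and then tries to import from itself). A hedged way to confirm which module Python is actually picking up:

```python
# Hedged diagnostic sketch: check whether `catalyst` resolves to the installed package
# or to a local catalyst.py sitting next to the script being run.
import catalyst

# A path ending in .../site-packages/catalyst/__init__.py means the real package;
# .../crypocurrency/catalyst.py means a local file is shadowing it.
print(catalyst.__file__)

# Works only once the shadowing file has been renamed or removed.
from catalyst import run_algorithm
print(run_algorithm)
```

Renaming the local catalyst.py (and deleting its __pycache__ entry) should make `from catalyst import run_algorithm` import from the installed package again; the separate certifi/distutils uninstall error during installation is unrelated to this ImportError.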
NTP server problem, could someone please take a look?
Below is the NTP server's configuration file. Right after a sync the time is correct, but after a while it ends up 30 minutes slow. I am not sure whether this is related to this NTP server; please check whether the configuration file is correct.

```
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1

# Hosts on local network are less restricted.
#restrict 10.200.0.6 mask 255.255.255.0

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server ntp.ntsc.ac.cn prefer
server time.windows.com
server time.nist.gov
server pool.ntp.org
server ntp.neu.edu.cn
server ntp.gwadar.cn
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org

#broadcast 10.200.0.6 autokey            # broadcast server
#broadcastclient                         # broadcast client
#broadcast 224.0.1.1 autokey             # multicast server
#multicastclient 224.0.1.1               # multicast client
#manycastserver 239.255.254.254          # manycast server
#manycastclient 239.255.254.254 autokey  # manycast client

# allow update time by the upper server
restrict ntp.ntsc.ac.cn
restrict time.windows.com
restrict pool.ntp.org
restrict ntp.neu.edu.cn
restrict ntp.gwadar.cn
restrict 0.centos.pool.ntp.org
restrict 1.centos.pool.ntp.org
restrict time.nist.gov
restrict 127.0.0.1
restrict -6 ::1

# undisciplined local clock. this is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server 127.127.1.0   # local clock
fudge 127.127.1.0 stratum 10

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
```
C#: some questions about digital signatures
```
string date = HttpServiceByForm("http://192.168.1.1:8888:/accessToken/nonce", "openId=openApiTest");
string[] sArray = date.Split('"');
byte[] rgbHash = Convert.FromBase64String(sArray[7]);
X509Certificate2 objx5092 = new X509Certificate2(@"..\Plugs\certifivate\服务通讯证书.pfx", "1234");
RSACryptoServiceProvider rsa = objx5092.PrivateKey as System.Security.Cryptography.RSACryptoServiceProvider;
byte[] rgbHash = Convert.FromBase64String(sArray[7]);
// MD5, you know what that is; returns byte[]
byte[] bb = rsa.SignData(rgbHasH, "MD5");
// Base64-encode the data after MD5 signing (required); the result is the signed data
string signature = System.Convert.ToBase64String(bb);
//string xmlprivate = objx5092.PrivateKey.ToXmlString(true);
return signature;           // return the signature as base64
return byteToHexStr(bb);    // convert the signature result to a hex string
}
```

The code is above. What I need after signing is a value in the form c0acd8832b574243b5938afc183ec760, but the result of the code above is 128 bytes of data. Any advice would be appreciated.
Random Walking: how to approach the program
Problem Description

The Army of Coin-tossing Monkeys (ACM) is in the business of producing randomness. Good random numbers are important for many applications, such as cryptography, online gambling, randomized algorithms and panic attempts at solutions in the last few seconds of programming competitions. Recently, one of the best monkeys has had to retire. However, before he left, he invented a new, cheaper way to generate randomness compared to directly using the randomness generated by coin-tossing monkeys. The method starts by taking an undirected graph with 2n nodes labelled 0, 1, …, 2n - 1. To generate k random n-bit numbers, they will let the monkeys toss n coins to decide where on the graph to start. This node number is the first number output. The monkeys will then pick a random edge from this node, and jump to the node that this edge connects to. This new node will be the second random number output. They will then select a random edge from this node (possibly back to the node they arrived from in the last step), follow it and output the number of the node they landed on. This walk will continue until k numbers have been output. During experiments, the ACM has noticed that different graphs give different output distributions, some of them not very random. So, they have asked for your help testing the graphs to see if the randomness is of good enough quality to sell. They consider a graph good if, for each of the n bits in each of the k numbers generated, the probability that this bit is output as 1 is greater than 25% and smaller than 75%.

Input

The input will consist of several data sets. Each set will start with a line consisting of three numbers k, n, e separated by single spaces, where k is the number of n-bit numbers to be generated and e is the number of edges in the graph (1 ≤ k ≤ 100, 1 ≤ n ≤ 10 and 1 ≤ e ≤ 2000). The next e lines will consist of two space-separated integers v1, v2 where 0 ≤ v1, v2 < 2n and v1 ≠ v2. Edges are undirected and each node is guaranteed to have at least one edge. There may be multiple edges between the same pair of nodes. The last test case will be followed by a line with k = n = e = 0, which should not be processed.

Output

For each input case, output a single line consisting of the word Yes if the graph is good, and No otherwise.

Sample Input

10 2 3
0 3
1 3
2 3
5 2 4
0 1
0 3
1 2
2 3
0 0 0

Sample Output

No
Yes
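Since the walk is a Markov chain, the bit statistics can be computed exactly rather than simulated: keep a probability distribution over the 2^n nodes, start it uniform, and after checking each of the k output numbers push the distribution one step through the graph, moving each node's mass in equal shares along its incident edges (multi-edges counted with multiplicity). At each of the k steps, the probability that bit j is 1 is the total mass on nodes whose j-th bit is set, and the graph is good only if every such probability stays strictly between 0.25 and 0.75. A hedged sketch of that idea, with I/O handling kept minimal:

```python
# Hedged sketch of the exact-probability approach for the Random Walking problem.
import sys
from collections import defaultdict

def is_good(k, n, edges):
    size = 1 << n
    # Adjacency with multiplicity: each undirected edge contributes to both endpoints.
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    prob = [1.0 / size] * size          # uniform start: n fair coin tosses
    for _ in range(k):                  # one iteration per output number
        # Check every bit of the number emitted from the current distribution.
        for bit in range(n):
            p_one = sum(prob[node] for node in range(size) if node >> bit & 1)
            if not (0.25 < p_one < 0.75):
                return False
        # Advance the walk: spread each node's mass evenly over its incident edges.
        nxt = [0.0] * size
        for node in range(size):
            if prob[node] == 0.0:
                continue
            share = prob[node] / len(adj[node])
            for neigh in adj[node]:
                nxt[neigh] += share
        prob = nxt
    return True

def main():
    data = sys.stdin.read().split()
    i, out = 0, []
    while True:
        k, n, e = int(data[i]), int(data[i + 1]), int(data[i + 2])
        i += 3
        if k == 0 and n == 0 and e == 0:
            break
        edges = []
        for _ in range(e):
            edges.append((int(data[i]), int(data[i + 1])))
            i += 2
        out.append("Yes" if is_good(k, n, edges) else "No")
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

On the sample input this prints No then Yes: in the first graph all mass piles onto node 3 after one step, so bit probabilities leave the (0.25, 0.75) band, while the second graph is a 4-cycle whose uniform distribution stays uniform.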
C# MD5 results differ: why is the leading 0 missing? What is going on?
Comparing method 1 and method 2, why does method 2 drop the leading 0? Please explain.

```
// MD5 hashing:
// jQuery hashing:
// $.md5("FA9D4191BEC8A5CE6DE0B9EB8BDA57B3HSNF")
// gives "0f604985adc25c5683315d35f6c15a7c"

// Method 1 gives
// "0F604985ADC25C5683315D35F6C15A7C"
string _str1 = System.Web.Security.FormsAuthentication.HashPasswordForStoringInConfigFile("FA9D4191BEC8A5CE6DE0B9EB8BDA57B3HSNF", "MD5");

// Method 2 gives
// "F604985ADC25C5683315D35F6C15A7C"
string _str2 = "";
System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create();
byte[] s = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes("FA9D4191BEC8A5CE6DE0B9EB8BDA57B3HSNF"));
// Convert the byte array to a string in a loop; the string is built with ordinary formatting.
for (int i = 0; i < s.Length; i++)
{
    // Format each byte as hexadecimal. Uppercase "X" gives uppercase letters; lowercase "x" would give lowercase ones.
    _str2 += s[i].ToString("X");
}
```
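The leading zero disappears because `ToString("X")` formats each byte with the minimum number of hex digits, so a byte like 0x0F becomes "F" instead of "0F"; the usual fix is the fixed-width format "X2" (or "x2" for lowercase, to match the jQuery output). The same pitfall, illustrated below in Python for a quick check of the expected digest; the C# fix itself is only the format-string change.

```python
# Hedged illustration: per-byte hex without zero padding drops leading zeros,
# which is exactly what ToString("X") does in the C# snippet above.
import hashlib

digest = hashlib.md5(b"FA9D4191BEC8A5CE6DE0B9EB8BDA57B3HSNF").digest()

unpadded = "".join("{:X}".format(b) for b in digest)    # like ToString("X"): leading zeros lost
padded = "".join("{:02X}".format(b) for b in digest)    # like ToString("X2"): two digits per byte

print(unpadded)  # matches the question's method 2: F604985ADC25C5683315D35F6C15A7C (31 chars)
print(padded)    # matches method 1 / jQuery: 0F604985ADC25C5683315D35F6C15A7C (32 chars)
```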