Installing libxml2 with Python 2.7.3: `import lxml.html` raises an error, please advise

The system is Red Hat, which ships with Python 2.6.6, but I recently needed Scrapy, which requires Python 2.7.3. I installed libxml2 via `yum install`; the install succeeded, and `import lxml` raises no error, but `import lxml.html` fails with the following:

```
>>> import lxml.html
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/lxml/html/__init__.py", line 12, in <module>
    from lxml import etree
  File "lxml.etree.pyx", line 89, in init lxml.etree (src/lxml/lxml.etree.c:140164)
TypeError: encode() argument 1 must be string without null bytes, not unicode
```

Any guidance from Python experts would be appreciated...

1 answer

It looks like the lxml build you installed is incompatible with your Python version.
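If that is the cause, a quick way to confirm it is to check which interpreter and which lxml build are actually being picked up. A minimal diagnostic sketch, to be run with the exact interpreter Scrapy will use:

```
# Diagnostic sketch: confirm which Python and which lxml build are in use.
import sys
print(sys.version)      # should report 2.7.3, not the system's 2.6.6
print(sys.executable)   # which python binary is actually running

import lxml
print(lxml.__file__)    # where the lxml package was loaded from

try:
    # This is the same import that lxml.html performs and that fails above.
    from lxml import etree
    print(etree.LXML_VERSION)             # lxml release of the C extension
    print(etree.LIBXML_COMPILED_VERSION)  # libxml2 it was compiled against
    print(etree.LIBXML_VERSION)           # libxml2 it is running against
except Exception as e:
    print("etree failed to import: %r" % (e,))
```

If the interpreter or the paths disagree with what you expect, reinstalling lxml with that interpreter's own pip, so the C extension is rebuilt against the matching Python and libxml2, is the usual fix.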

Other related questions
lxml installed under Python 2.7, but etree cannot be imported, please advise
![图片说明](https://img-ask.csdn.net/upload/201601/09/1452322012_379033.png) In the lxml folder, etree has a .pyd extension. Does this mean libxml2 and libxslt were not installed properly?
Problem importing libxml in Python on macOS
I installed libxml2dom with pip, but `import libxml2dom` fails:
```
shandow@mac:~ > python
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import libxml2dom
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/libxml2dom/__init__.py", line 24, in <module>
    from libxml2dom.macrolib import *
  File "/Library/Python/2.7/site-packages/libxml2dom/macrolib/__init__.py", line 26, in <module>
    from libxml2dom.macrolib.macrolib import *
  File "/Library/Python/2.7/site-packages/libxml2dom/macrolib/macrolib.py", line 30, in <module>
    from libxmlmods import libxml2mod
ImportError: No module named libxmlmods
```
Following the error message, I opened /Library/Python/2.7/site-packages/libxml2dom/macrolib/macrolib.py. The problem is here:
```
# Try the conventional import first.
try:
    import libxml2mod
except ImportError:
    from libxmlmods import libxml2mod
```
It first tries to import libxml2mod and, if that raises an exception, imports libxml2mod from libxmlmods. I clearly installed it, so why does the import fail? I then tried `sudo pip install libxmlmods`, but pip reports that no such package exists...... Please help.
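One thing worth checking in this situation is whether the interpreter can locate the libxml2mod C module at all, since both branches of that try/except ultimately need it. A small Python 2 sketch, matching the interpreter in the transcript:

```
# Sketch: check whether this Python 2 can locate the libxml2mod C module,
# and where it would load it from.
import imp
import sys

print(sys.path)  # the directories searched for libxml2mod

try:
    f, pathname, description = imp.find_module('libxml2mod')
    print('libxml2mod found at: %s' % pathname)
except ImportError:
    # A plausible cause: libxml2's own Python bindings (which provide
    # libxml2mod) were never installed for this interpreter.
    print('libxml2mod is not on sys.path for this interpreter')
```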
Installing ns-3.8 on Ubuntu 13.10: python2.7-config stops unexpectedly
When running ./build.py, the error says "Sorry, the application i386-linux-gnu-python2.7-config stopped unexpectedly", and the build fails. The failing part is as follows:
```
# Build NS-3
Entering directory `./ns-3.8'
Note: configuring ns-3 without NSC
 => python waf configure --with-regression-traces ../ns-3.8-ref-traces --with-pybindgen ../pybindgen-0.14.1
Checking for program gcc or cc : /usr/bin/gcc
Checking for program cpp : /usr/bin/cpp
Checking for program ar : /usr/bin/ar
Checking for program ranlib : /usr/bin/ranlib
Checking for gcc : ok
Checking for program g++ or c++ : /usr/bin/g++
Checking for program ar : /usr/bin/ar
Checking for program ranlib : /usr/bin/ranlib
Checking for g++ : ok
Checking for program pkg-config : /usr/bin/pkg-config
Checking for regression traces location : ok ../ns-3.8-ref-traces (given)
Checking for -Wl,--soname=foo support : yes
Checking for header stdlib.h : yes
Checking for header signal.h : yes
Checking for header pthread.h : yes
Checking for high precision time implementation : 128-bit integer
Checking for header stdint.h : yes
Checking for header inttypes.h : yes
Checking for header sys/inttypes.h : not found
Checking for library rt : yes
Checking for header netpacket/packet.h : yes
Checking for header linux/if_tun.h : yes
Checking for pkg-config flags for GTK_CONFIG_STORE : ok
Checking for pkg-config flags for LIBXML2 : ok
Checking for library sqlite3 : yes
Checking for NSC location : not found
Checking for header sys/socket.h : yes
Checking for header netinet/in.h : yes
Checking for program python : /usr/bin/python
Checking for Python version >= 2.3 : ok 2.7.5
Checking for library python2.7 : yes
Checking for program python2.7-config : /usr/bin/python2.7-config
  File "/usr/bin/python2.7-config", line 5
    echo "Usage: $0 --prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--configdir"
       ^
SyntaxError: invalid syntax
Traceback (most recent call last):
  File "waf", line 158, in <module>
    Scripting.prepare(t, cwd, VERSION, wafdir)
  File "/home/daisy/workspace/eclipseWorkspace/ns3.8/ns-3.8/.waf-1.5.16-e6d03192b5ddfa5ef2c8d65308e48e42/wafadmin/Scripting.py", line 105, in prepare
    prepare_impl(t,cwd,ver,wafdir)
```
Error while compiling libxml; frustrated and looking for a solution
I tried to set up LAMP today, and compiling libxml failed with this error:
```
./.libs/libxml2.so: undefined reference to `gzopen64'
collect2: ld returned 1 exit status
make[2]: *** [xmllint] Error 1
make[2]: Leaving directory `/usr/local/src/libxml2-2.7.2'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/libxml2-2.7.2'
make: *** [all] Error 2
```
Any ideas?? I have built this before and never hit this problem.
Why does my Scrapy crawl of the Google Play store scrape no content?
I want to crawl the Google Play store with Scrapy. The code raises no errors, yet nothing is scraped. Why is that?
```
# -*- coding: utf-8 -*-
import scrapy
# from scrapy.spiders import CrawlSpider, Rule
# from scrapy.linkextractors import LinkExtractor
from gp.items import GpItem
# from html.parser import HTMLParser as SGMLParser
import requests

class GoogleSpider(scrapy.Spider):
    name = 'google'
    allowed_domains = ['https://play.google.com/']
    start_urls = ['https://play.google.com/store/apps/']

    '''
    rules = [
        Rule(LinkExtractor(allow=("https://play\.google\.com/store/apps/details",)),
             callback='parse_app', follow=True),
    ]
    '''

    def parse(self, response):
        selector = scrapy.Selector(response)
        urls = selector.xpath('//a[@class="LkLjZd ScJHi U8Ww7d xjAeve nMZKrb id-track-click"]/@href').extract()
        link_flag = 0
        links = []
        for link in urls:
            links.append(link)
        for each in urls:
            yield scrapy.Request(links[link_flag], callback=self.parse_next, dont_filter=True)
            link_flag += 1

    def parse_next(self, response):
        selector = scrapy.Selector(response)
        app_urls = selector.xpath('//div[@class="details"]/a[@class="title"]/@href').extract()
        print(app_urls)
        urls = []
        for url in app_urls:
            url = "http://play.google.com" + url
            print(url)
            urls.append(url)
        link_flag = 0
        for each in app_urls:
            yield scrapy.Request(urls[link_flag], callback=self.parse_app, dont_filter=True)
            link_flag += 1

    def parse_app(self, response):
        item = GpItem()
        item['app_url'] = response.url
        item['app_name'] = response.xpath('//div[@itemprop="name"]').xpath('text()').extract()
        item['app_icon'] = response.xpath('//img[@itempro="image"]/@src')
        item['app_developer'] = response.xpath('//')
        print(response.text)
        yield item
```
The terminal output is as follows:
```
BettyMacbookPro-764:gp zhanjinyang$ scrapy crawl google
2019-11-12 08:46:45 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: gp)
2019-11-12 08:46:45 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.7.1 (default, Dec 14 2018, 13:28:58) - [Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.4.2, Platform Darwin-18.5.0-x86_64-i386-64bit
2019-11-12 08:46:45 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'gp', 'NEWSPIDER_MODULE': 'gp.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['gp.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'}
2019-11-12 08:46:45 [scrapy.extensions.telnet] INFO: Telnet Password: b2d7dedf1f4a91eb
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-11-12 08:46:45 [scrapy.middleware] INFO: Enabled item pipelines:
['gp.pipelines.GpPipeline']
2019-11-12 08:46:45 [scrapy.core.engine] INFO: Spider opened
2019-11-12 08:46:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-11-12 08:46:45 [py.warnings] WARNING: /anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/offsite.py:61: URLWarning: allowed_domains accepts only domains, not URLs. Ignoring URL entry https://play.google.com/ in allowed_domains.
  warnings.warn(message, URLWarning)
2019-11-12 08:46:45 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-11-12 08:46:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://play.google.com/robots.txt> (referer: None)
2019-11-12 08:46:46 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://play.google.com/store/apps/> (referer: None)
2019-11-12 08:46:46 [scrapy.core.engine] INFO: Closing spider (finished)
2019-11-12 08:46:46 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 810,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 232419,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 11, 12, 8, 46, 46, 474543),
 'log_count/DEBUG': 2,
 'log_count/INFO': 9,
 'log_count/WARNING': 1,
 'memusage/max': 58175488,
 'memusage/startup': 58175488,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 11, 12, 8, 46, 45, 562775)}
2019-11-12 08:46:46 [scrapy.core.engine] INFO: Spider closed (finished)
```
Help, please!!!
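One concrete issue is visible in the log itself: the URLWarning says allowed_domains accepts only domains, so the URL entry is being ignored entirely. A minimal sketch of the corrected attributes, everything else unchanged:

```
import scrapy

class GoogleSpider(scrapy.Spider):
    # Sketch: per the URLWarning in the log above, allowed_domains must
    # hold bare domain names; a full URL entry is ignored outright.
    name = 'google'
    allowed_domains = ['play.google.com']  # domain only, no scheme or path
    start_urls = ['https://play.google.com/store/apps/']
```

Beyond that, the crawl closing after only two requests suggests the first XPath matched nothing; Google Play renders much of the store page with JavaScript, so class names seen in the browser's DOM may simply not exist in the raw HTML that Scrapy downloads.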
Why does this Scrapy crawl fail?
```
C:\Users\Administrator\Desktop\新建文件夹\xiaozhu>python -m scrapy crawl xiaozhu
2019-10-26 11:43:11 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: xiaozhu)
2019-10-26 11:43:11 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 15:51:26) [MSC v.1900 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1
2019-10-26 11:43:11 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'xiaozhu', 'SPIDER_MODULES': ['xiaozhu.spiders'], 'NEWSPIDER_MODULE': 'xiaozhu.spiders'}
2019-10-26 11:43:11 [scrapy.extensions.telnet] INFO: Telnet Password: c61bda45d63b8138
2019-10-26 11:43:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-10-26 11:43:12 [scrapy.middleware] INFO: Enabled item pipelines: []
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider opened
2019-10-26 11:43:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-10-26 11:43:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-10-26 11:43:12 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (307) to <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> from <GET http://bj.xiaozhu.com/fangzi/125535477903.html>
2019-10-26 11:43:12 [scrapy.core.engine] DEBUG: Crawled (400) <GET https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html> (referer: None)
2019-10-26 11:43:12 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <400 https://bizverify.xiaozhu.com?slideRedirect=https%3A%2F%2Fbj.xiaozhu.com%2Ffangzi%2F125535477903.html>: HTTP status code is not handled or not allowed
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Closing spider (finished)
2019-10-26 11:43:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 529,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 725,
 'downloader/response_count': 2,
 'downloader/response_status_count/307': 1,
 'downloader/response_status_count/400': 1,
 'elapsed_time_seconds': 0.427734,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 889648),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/400': 1,
 'log_count/DEBUG': 2,
 'log_count/INFO': 11,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2019, 10, 26, 3, 43, 12, 461914)}
2019-10-26 11:43:12 [scrapy.core.engine] INFO: Spider closed (finished)
```
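The log itself shows what happened: the request was 307-redirected to bizverify.xiaozhu.com (an anti-bot slide-verification page), and the 400 it returned was then dropped by HttpErrorMiddleware. To at least inspect that response instead of having it silently ignored, a spider can opt in to the status code. A minimal sketch; note it will not bypass the verification itself:

```
import scrapy

class XiaozhuSpider(scrapy.Spider):
    # Hypothetical minimal spider mirroring the crawl in the log above.
    name = 'xiaozhu'
    start_urls = ['http://bj.xiaozhu.com/fangzi/125535477903.html']
    # Let 400 responses reach parse() instead of being discarded by
    # HttpErrorMiddleware, so the verification page can be examined.
    handle_httpstatus_list = [400]

    def parse(self, response):
        self.logger.info('got %s for %s', response.status, response.url)
        self.logger.debug(response.text[:500])  # peek at the verification page
```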
A Baidu Muzhi Doctor (拇指医生) crawler: the first step is to collect all links for a given question, but nothing comes out. Could someone take a look and tell me why?
#A few words up front: with this crawler I want to collect every Baidu Muzhi Doctor link about "咳嗽" (cough); the next step would be to scrape the item contents from each collected link, but I am stuck at step one. Please take a look. I posted this question before, but it has somehow disappeared (deleted, maybe...?). Help a beginner out; much appreciated! This is the structure of my crawler:
![图片说明](https://img-ask.csdn.net/upload/201911/27/1574787999_274479.png)
##ks:
```
# -*- coding: utf-8 -*-
import scrapy
from kesou.items import KesouItem
from scrapy.selector import Selector
from scrapy.spiders import Spider
from scrapy.http import Request, FormRequest
import pymongo

class KsSpider(scrapy.Spider):
    name = 'ks'
    allowed_domains = ['kesou,baidu.com']
    start_urls = ['https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M']

    def parse(self, response):
        item = KesouItem()
        contents = response.xpath('.//h3[@class="t"]')
        for content in contents:
            url = content.xpath('.//a/@href').extract()[0]
            item['url'] = url
            yield item
        if self.offset < 760:
            self.offset += 10
            yield scrapy.Request(url="https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=" + str(self.offset) + "&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M", callback=self.parse, dont_filter=True)
```
##items:
```
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class KesouItem(scrapy.Item):
    # question ID
    question_ID = scrapy.Field()
    # question text
    question = scrapy.Field()
    # time the doctor's answer was posted
    answer_pubtime = scrapy.Field()
    # question details
    description = scrapy.Field()
    # doctor's name
    doctor_name = scrapy.Field()
    # doctor's title
    doctor_title = scrapy.Field()
    # doctor's hospital
    hospital = scrapy.Field()
```
##middlewares:
```
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

class KesouSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

class KesouDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
```
##pipelines:
```
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo
from scrapy.utils.project import get_project_settings

settings = get_project_settings()

class KesouPipeline(object):
    def __init__(self):
        host = settings["MONGODB_HOST"]
        port = settings["MONGODB_PORT"]
        dbname = settings["MONGODB_DBNAME"]
        sheetname = settings["MONGODB_SHEETNAME"]
        # create the MongoDB connection
        client = pymongo.MongoClient(host=host, port=port)
        # select the database
        mydb = client[dbname]
        # collection (table) the data is stored in
        self.sheet = mydb[sheetname]

    def process_item(self, item, spider):
        data = dict(item)
        self.sheet.insert(data)
        return item
```
##settings:
```
# -*- coding: utf-8 -*-

# Scrapy settings for kesou project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'kesou'

SPIDER_MODULES = ['kesou.spiders']
NEWSPIDER_MODULE = 'kesou.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'kesou (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0"

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'kesou.middlewares.KesouSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'kesou.middlewares.KesouDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'kesou.pipelines.KesouPipeline': 300,
}

# MongoDB host
MONGODB_HOST = "127.0.0.1"
# MongoDB port
MONGODB_PORT = 27017
# database name
MONGODB_DBNAME = "ks"
# collection the data is stored in
MONGODB_SHEETNAME = "ks_urls"

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
##run.py:
```
# -*- coding: utf-8 -*-
from scrapy import cmdline
cmdline.execute("scrapy crawl ks".split())
```
##This is the result of running it:
```
PS D:\scrapy_project\kesou> scrapy crawl ks
2019-11-27 00:14:17 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: kesou)
2019-11-27 00:14:17 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twis.7.0, Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC
v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryphy 2.6.1, Platform Windows-10-10.0.18362-SP0 2019-11-27 00:14:17 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'kesou', 'COOKIES_ENABLED': False, 'NEWSPIDER_MODULE': 'spiders', 'SPIDER_MODULES': ['kesou.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67 2019-11-27 00:14:17 [scrapy.extensions.telnet] INFO: Telnet Password: 051629c46f34abdf 2019-11-27 00:14:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled item pipelines: ['kesou.pipelines.KesouPipeline'] 2019-11-27 00:14:19 [scrapy.core.engine] INFO: Spider opened 2019-11-27 00:14:19 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2019-11-27 00:14:19 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2019-11-27 00:14:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX% (referer: None) 2019-11-27 00:14:20 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFFSYX%2B1M> (referer: None) Traceback (most recent call last): File "d:\anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback yield next(it) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output for x in result: File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr> return (_set_referer(r) for r in result or ()) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr> return (r for r in result 
or () if _filter(r))
  File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable
    for r in iterable:
  File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "D:\scrapy_project\kesou\kesou\spiders\ks.py", line 19, in parse
    item['url'] = url
  File "d:\anaconda3\lib\site-packages\scrapy\item.py", line 73, in __setitem__
    (self.__class__.__name__, key))
KeyError: 'KesouItem does not support field: url'
2019-11-27 00:14:20 [scrapy.core.engine] INFO: Closing spider (finished)
2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 438,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 68368,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.992207,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804),
 'log_count/DEBUG': 1,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/KeyError': 1,
 'start_time': datetime.datetime(2019, 11, 26, 16, 14, 19, 863597)}
2019-11-27 00:14:21 [scrapy.core.engine] INFO: Spider closed (finished)
```
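The traceback at the end names the first bug precisely: parse() assigns item['url'], but KesouItem declares no url field. A sketch of the fixes suggested by the log and the spider source above (the field name is taken from the KeyError; the other two issues are visible in ks.py):

```
import scrapy

# items.py -- an Item only accepts keys declared as Fields, so the field
# named in "KeyError: 'KesouItem does not support field: url'" must exist:
class KesouItem(scrapy.Item):
    url = scrapy.Field()  # the missing field that parse() tries to assign
    # ...keep the existing question/doctor/hospital fields as they are

# ks.py -- two more problems visible in the spider source:
class KsSpider(scrapy.Spider):
    name = 'ks'
    # 'kesou,baidu.com' contains a comma typo and is not a real domain;
    # 'baidu.com' is presumably what was intended.
    allowed_domains = ['baidu.com']
    # self.offset is incremented in parse() but never initialised, which
    # would raise AttributeError once the KeyError is fixed.
    offset = 0
```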
Installing the Scrapy-related package libxml2dom from PyCharm on Ubuntu fails with SyntaxError: invalid syntax
![图片说明](https://img-ask.csdn.net/upload/201904/01/1554126877_641852.png) Here it is:
```
Collecting libxml2dom
  Using cached https://files.pythonhosted.org/packages/03/13/835078254cffd5cf19cae3ca5782aae4120c86c888a0beb1c26390a5d6d6/libxml2dom-0.4.7.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pycharm-packaging/libxml2dom/setup.py", line 5, in <module>
        import libxml2dom
      File "/tmp/pycharm-packaging/libxml2dom/libxml2dom/__init__.py", line 151
        raise KeyError, name
                      ^
    SyntaxError: invalid syntax
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pycharm-packaging/libxml2dom/
```
Ideally, could someone explain why this happens? Thanks! ![图片说明](https://img-ask.csdn.net/upload/201904/01/1554126587_437168.png) All the other packages install fine; I cannot figure out what is going on with this one, please help!! ![图片说明](https://img-ask.csdn.net/upload/201904/01/1554126653_694122.png)
Installing ns-3 on Ubuntu: compilation always fails partway through, as follows:
```
Build failed
 -> task failed (exit status 1):
{task 139621304904464: cxx print-introspected-doxygen.cc -> print-introspected-doxygen.cc.4.o}
['/usr/bin/g++', '-O0', '-ggdb', '-g3', '-Wall', '-Werror', '-Wno-error=deprecated-declarations', '-fstrict-aliasing', '-Wstrict-aliasing', '-pthread', '-pthread', '-fno-strict-aliasing', '-fwrapv', '-fdebug-prefix-map=/build/python2.7-3hk45v/python2.7-2.7.15~rc1=.', '-fstack-protector-strong', '-fno-strict-aliasing', '-Ibuild', '-I.', '-I.', '-I/home/zhangzq/tarballs/ns-allinone-3.13', '-I/usr/include/gtk-2.0', '-I/usr/lib/x86_64-linux-gnu/gtk-2.0/include', '-I/usr/include/gio-unix-2.0', '-I/usr/include/cairo', '-I/usr/include/pango-1.0', '-I/usr/include/atk-1.0', '-I/usr/include/pixman-1', '-I/usr/include/gdk-pixbuf-2.0', '-I/usr/include/libpng16', '-I/usr/include/harfbuzz', '-I/usr/include/glib-2.0', '-I/usr/lib/x86_64-linux-gnu/glib-2.0/include', '-I/usr/include/freetype2', '-I/usr/include/libxml2', '-I/usr/include/python2.7', '-I/usr/include/x86_64-linux-gnu/python2.7', '-DNS3_ASSERT_ENABLE', '-DNS3_LOG_ENABLE', '-DSQLITE3=1', '-DHAVE_IF_TUN_H=1', '-DPYTHONDIR="/usr/local/lib/python2.7/dist-packages"', '-DPYTHONARCHDIR="/usr/local/lib/python2.7/dist-packages"', '-DHAVE_PYTHON_H=1', '-DENABLE_GSL', '-DNDEBUG', '-D_FORTIFY_SOURCE=2', '../utils/print-introspected-doxygen.cc', '-c', '-o', 'utils/print-introspected-doxygen.cc.4.o']
Traceback (most recent call last):
  File "./build.py", line 147, in <module>
    sys.exit(main(sys.argv))
  File "./build.py", line 138, in main
    build_ns3(config, build_examples, build_tests, args, build_options)
  File "./build.py", line 61, in build_ns3
    run_command([sys.executable, "waf", "build"] + build_options)
  File "/home/zhangzq/tarballs/ns-allinone-3.13/util.py", line 24, in run_command
    raise CommandError("Command %r exited with code %i" % (argv, retval))
util.CommandError: Command ['/usr/bin/python', 'waf', 'build'] exited with code 1
```
quickfix-1.13.3 build problem on Linux
```
/bin/bash ../libtool --tag=CXX --mode=link g++ -g -O2 -Wall -ansi -Wpointer-arith -Wwrite-strings -I/usr/include/libxml2 -L../UnitTest++ -lUnitTest++ -o at at.o C++/libquickfix.la -lpthread -lxml2
libtool: link: g++ -g -O2 -Wall -ansi -Wpointer-arith -Wwrite-strings -I/usr/include/libxml2 -o .libs/at at.o -L../UnitTest++ -lUnitTest++ C++/.libs/libquickfix.so -lpthread -lxml2
at.o: In function `boost::thread_exception::thread_exception(int, char const*)':
/usr/include/boost/thread/exceptions.hpp:51: undefined reference to `boost::system::system_category()'
at.o: In function `__static_initialization_and_destruction_0':
/usr/include/boost/system/error_code.hpp:221: undefined reference to `boost::system::generic_category()'
/usr/include/boost/system/error_code.hpp:222: undefined reference to `boost::system::generic_category()'
/usr/include/boost/system/error_code.hpp:223: undefined reference to `boost::system::system_category()'
collect2: error: ld returned 1 exit status
Makefile:440: recipe for target 'at' failed
make[3]: *** [at] Error 1
make[3]: Leaving directory '/home/ktgs/quickfix-1.13.3/src'
Makefile:495: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory '/home/ktgs/quickfix-1.13.3/src'
Makefile:484: recipe for target 'all-recursive' failed
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/ktgs/quickfix-1.13.3'
Makefile:393: recipe for target 'all' failed
```
Could anyone help with this?
What impact would upgrading glibc to 2.29 have on a Red Hat system?
I have been migrating a PHP server recently and have to install PHP's build dependencies by hand (ops won't allow yum, which is a pain). While installing libxml2-2.9.8-5.fc30.x86_64 (required by the security team: it has to be newer than libxml2-2.9.1-6.el7_2.3.x86_64, which they say carries a denial-of-service risk; nothing I can do about that), I got this error:
```
error: Failed dependencies:
        libc.so.6(GLIBC_2.28)(64bit) is needed by libxml2-2.9.8-5.fc30.x86_64
        libm.so.6(GLIBC_2.29)(64bit) is needed by libxml2-2.9.8-5.fc30.x86_64
```
* Question 1: if I follow the hint and upgrade glibc to 2.29, what impact will that have on the system? I have read that some systems fail to boot afterwards and that this upgrade should not be done lightly.
* Question 2: if glibc does have to be upgraded, how should it be done?
* Question 3: alternatively, if I use a slightly lower libxml2 version, would that avoid upgrading glibc altogether?

Any advice would be much appreciated. Thanks again!
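One low-risk first step is confirming which glibc the machine is on now; Python can ask without touching the toolchain. A small sketch using only the standard library:

```
# Sketch: report the glibc version the current system is running.
import platform
print(platform.libc_ver())  # e.g. ('glibc', '2.17') on RHEL/CentOS 7

import ctypes
libc = ctypes.CDLL('libc.so.6')
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version())  # asks glibc itself, e.g. b'2.17'
```

As for the error itself: the -5.fc30 suffix marks that rpm as a Fedora 30 build, which is why it demands GLIBC 2.28/2.29. Rebuilding libxml2 2.9.8 from source against the existing glibc, or finding a package built for el7, should sidestep the glibc upgrade entirely, which is essentially your question 3.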
pydub cannot open a WAV file
I planned to use pydub to batch-convert recording files, but it fails as soon as it tries to open a recording.
Code:
```
from pydub import AudioSegment
sound = AudioSegment.from_file('D:\\wavdownload\\1b449bd73b866e73c997401c19462353.wav', format='wav')
```
Error:
```
Traceback (most recent call last):
  File "D:/PycharmProjects/chaxunyemian/wavtomp.py", line 5, in <module>
    sound = AudioSegment.from_file('D:\\wavdownload\\1b449bd73b866e73c997401c19462353.wav', format='wav')
  File "D:\Anaconda3\envs\baidujiami\lib\site-packages\pydub\audio_segment.py", line 704, in from_file
    p.returncode, p_err))
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1
Output from ffmpeg/avlib:
ffmpeg version 4.2.1 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9.1.1 (GCC) 20190807
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, wav, from 'D:\wavdownload\1b449bd73b866e73c997401c19462353.wav':
  Duration: 00:01:17.70, bitrate: 64 kb/s
    Stream #0:0: Audio: pcm_alaw ([6][0][0][0] / 0x0006), 8000 Hz, mono, s16, 64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_alaw (native) -> pcm_s8 (native))
Press [q] to stop, [?] for help
[wav @ 000000000042b9c0] pcm_s8 codec not supported in WAVE format
Could not write header for output file #0 (incorrect codec parameters ?): Function not implemented
Error initializing output stream 0:0 -- Conversion failed!
```
Is there something wrong with the WAV file's encoding? How do I fix this? Converting the format with a command in cmd works:
```
ffmpeg -i 1b449bd73b866e73c997401c19462353.wav d:\wavdownload\1b449bd73b866e73c997401c19462353.mp3
```
Running that command produced the MP3 file.
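Since the manual ffmpeg run from cmd works, one workaround is to do the same conversion from Python first and then hand pydub a plain PCM file. A sketch with illustrative paths; it assumes ffmpeg is on PATH:

```
# Sketch: the source file is pcm_alaw; re-encode it to plain 16-bit PCM
# with ffmpeg (which already worked from cmd), then open it with pydub.
import subprocess
from pydub import AudioSegment

src = 'D:\\wavdownload\\1b449bd73b866e73c997401c19462353.wav'
dst = 'D:\\wavdownload\\converted.wav'  # illustrative output path

subprocess.check_call([
    'ffmpeg', '-y', '-i', src,
    '-acodec', 'pcm_s16le',  # plain signed 16-bit PCM instead of a-law
    dst,
])

sound = AudioSegment.from_file(dst, format='wav')
print(len(sound))  # duration in milliseconds
```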
Installing libxml2 on 64-bit Windows
Installing Scrapy requires the libxml2 library. I downloaded a few one-click .exe installers from the web, but they only support 32-bit, so I downloaded a 64-bit build instead, as shown. Which folder on my computer should I copy these files into? ![图片说明](https://img-ask.csdn.net/upload/201503/20/1426813049_886520.png) I hope you can help; thank you.
Installing MySQL Workbench on CentOS 7: dependency check fails
```
[root@localhost tmp]# rpm -ivh mysql-workbench-gpl-5.2.47-1el6.i686.rpm
error: Failed dependencies:
        libGL.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libX11.so.6 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libatk-1.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libatkmm-1.6.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.1) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.1.3) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.2) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.3) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.3.4) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libc.so.6(GLIBC_2.4) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libcairo.so.2 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libcairomm-1.0.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libcrypt.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libcrypto.so.10 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libdl.so.2 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libdl.so.2(GLIBC_2.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libdl.so.2(GLIBC_2.1) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libfontconfig.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libfreetype.so.6 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgcc_s.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgcc_s.so.1(GCC_3.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgcc_s.so.1(GLIBC_2.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgdk-x11-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgdk_pixbuf-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgdkmm-2.4.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgio-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgiomm-2.4.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libglib-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libglibmm-2.4.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgmodule-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgnome-keyring.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgobject-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgthread-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgtk-x11-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libgtkmm-2.4.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        liblua-5.1.so is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libm.so.6 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libm.so.6(GLIBC_2.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libm.so.6(GLIBC_2.1) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libmysqlclient_r.so.16 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libmysqlclient_r.so.16(libmysqlclient_16) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libnsl.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpango-1.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpangocairo-1.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpangoft2-1.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpangomm-1.4.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpcre.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpthread.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpthread.so.0(GLIBC_2.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libpython2.6.so.1.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        librt.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libsigc-2.0.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libsqlite3.so.0 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libssl.so.10 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(CXXABI_1.3) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(CXXABI_1.3.1) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(GLIBCXX_3.4) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(GLIBCXX_3.4.10) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(GLIBCXX_3.4.11) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libstdc++.so.6(GLIBCXX_3.4.9) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libuuid.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libuuid.so.1(UUID_1.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libxml2.so.2 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libxml2.so.2(LIBXML2_2.4.30) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libxml2.so.2(LIBXML2_2.6.0) is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libz.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        libzip.so.1 is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        pexpect is needed by mysql-workbench-gpl-5.2.47-1el6.i686
        python-paramiko is needed by mysql-workbench-gpl-5.2.47-1el6.i686
```
How do I resolve this?
Help: libxml2 cannot parse GB2312-encoded files on an ARM platform
Previously, on a MIPS platform, our code used the libxml2 library and could call its functions to parse GB2312-encoded XML files. Now the platform has been upgraded to ARM; I rebuilt the libxml shared library and rebuilt the application, but the GB2312 parsing that used to work is broken. Calling `doc = xmlReadFile(xml_filename, "GB2312", XML_PARSE_NOBLANKS);` reports: `Remarks.xml:1: parser error : Unsupported encoding gb2312`. A test program compiled and run on the server (Red Hat) can read GB2312-encoded XML files, but the same thing fails on ARM. Since it works on the server, the xml library itself must support the encoding, right? I have been fiddling with this for two days and am out of ideas, so I am asking here; any pointers would be much appreciated.
GEOquery package installation fails; please help
I rent a server and use Anaconda on it. Installing the GEOquery package fails (it was installed and working before, but stopped working after a reboot). Please advise:
```
> source("http://bioconductor.org/biocLite.R")
Bioconductor version 3.6 (BiocInstaller 1.28.0), ?biocLite for help
A new version of Bioconductor is available after installing the most recent version of R; see http://bioconductor.org/install
> biocLite('GEOquery')
BioC_mirror: https://bioconductor.org
Using Bioconductor 3.6 (BiocInstaller 1.28.0), R 3.4.1 (2017-06-30).
Installing package(s) ‘GEOquery’
also installing the dependency ‘xml2’
trying URL 'https://cran.rstudio.com/src/contrib/xml2_1.2.0.tar.gz'
Content type 'application/x-gzip' length 251614 bytes (245 KB)
==================================================
downloaded 245 KB
trying URL 'https://bioconductor.org/packages/3.6/bioc/src/contrib/GEOquery_2.46.15.tar.gz'
Content type 'application/x-gzip' length 13717934 bytes (13.1 MB)
==================================================
downloaded 13.1 MB
* installing *source* package ‘xml2’ ...
** package ‘xml2’ successfully unpacked and MD5 sums checked
Package libxml-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libxml-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libxml-2.0' found
Package libxml-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `libxml-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libxml-2.0' found
Using PKG_CFLAGS=
Using PKG_LIBS=-lxml2
------------------------- ANTICONF ERROR ---------------------------
Configuration failed because libxml-2.0 was not found. Try installing:
 * deb: libxml2-dev (Debian, Ubuntu, etc)
 * rpm: libxml2-devel (Fedora, CentOS, RHEL)
 * csw: libxml2_dev (Solaris)
If libxml-2.0 is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a libxml-2.0.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
--------------------------------------------------------------------
ERROR: configuration failed for package ‘xml2’
* removing ‘/home/u1366/R/x86_64-pc-linux-gnu-library/3.4/xml2’
ERROR: dependency ‘xml2’ is not available for package ‘GEOquery’
* removing ‘/home/u1366/R/x86_64-pc-linux-gnu-library/3.4/GEOquery’
```
collect2.exe: error: ld returned 1 exit status
I recently ported a Qt project from Linux to Windows and have been stuck on this error for half a day; the answers I found online did not solve it. Any insight is welcome. PS: the program was never built (no .exe was produced, so it cannot be a case of the program still running). ![图片说明](https://img-ask.csdn.net/upload/201809/11/1536665537_713032.png) Compiler: Qt 5.7.1 mingw32. The .pro file:
```
QT += core gui sql network
LIBS += -LF:\WORK\test\new\lib -lBaseLib -llibxml2.dll
```
Here libBaseLib.a was built by me with Qt mingw32, and it uses libxml2.a internally.
Cross-compiling php-5.3.6 fails with: configure: error: ZLIB extension requires zlib >= 1.0.9
Installing zlib-1.2.7:
```
#cd /zlib-1.2.7
# CC=arm-linux-gcc ./configure --prefix=~/libz --enable-shared
# make
# make install
```
Then building PHP:
```
# CC=arm-linux-gcc ./configure --host=arm-linux --prefix=/usr/local/php --enable-pdo --with-zlib --with-libxml --with-gd --with-freetype --with-jpeg --with-png --enable-mbstring --with-mysql=/usr/local/mysql/ --with-mysqli=/usr/local/mysql/bin/mysql_config --enable-gd-native-ttf --with-gettext=/usr/local/gettext/ --enable-magic-quotes --enable-sockets --with-zlib-dir=~/libz --without-iconv
```
Error:
```
checking if the location of ZLIB install directory is defined... ~/libz
checking for gzgets in -lz... no
configure: error: ZLIB extension requires zlib >= 1.0.9
```
![图片说明](https://img-ask.csdn.net/upload/201904/05/1554399752_988406.png)
Desperately looking for a solution!
Video transcoded to MP4 with ffmpeg for the web cannot be played with a video tag
1. Problem: ffmpeg is called to transcode an AVI video to MP4 format. Playing the result with the simplest possible `<video>` tag gives only sound, no picture (black screen). Local video players such as Baofeng play the transcoded video normally.
2. Environment: an SSH-framework webapp that should transcode videos automatically after upload and play them in the page; Firefox browser.
3. Known: MP4 comes in two variants and only the H264 kind plays in HTML; according to the transcode command the output is already H264 video and AAC audio, yet the screen stays black (with sound).
4. Unknown: (1) the exact meaning of the command-line output shown during transcoding; (2) people online say transcoding with Format Factory works, but is there a video transcoding method that can be invoked from code?
5. Screenshots of the problem follow:
(1) The Java transcoding code:
```
List<String> convert = new ArrayList<String>();
convert.add(ffmpegPath);        // path of the conversion tool
convert.add("-i");              // the "-i" flag names the file to convert
convert.add(sourceVideoPath);   // path of the video file to convert
convert.add("-acodec");
convert.add("aac");
convert.add("-vcodec");
convert.add("libx264");
convert.add("-y");
convert.add(targetFolder + fileRealNameNoExtension + targetExtension);
```
(2) Output during transcoding:
```
ffmpeg version N-93678-g4b7166c9d5 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 8.3.1 (GCC) 20190414
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 26.100 / 56. 26.100
  libavcodec     58. 52.100 / 58. 52.100
  libavformat    58. 27.103 / 58. 27.103
  libavdevice    58.  7.100 / 58.  7.100
  libavfilter     7. 50.100 /  7. 50.100
  libswscale      5.  4.100 /  5.  4.100
  libswresample   3.  4.100 /  3.  4.100
  libpostproc    55.  4.100 / 55.  4.100
Input #0, avi, from 'E:\Test\projectVideos\temp\1557994804863.avi':
  Metadata:
    genre           : Other
    track           : 1
    encoder         : Lavf54.63.104
  Duration: 00:00:16.80, start: 0.000000, bitrate: 2286 kb/s
    Stream #0:0: Video: h264 (Main) (H264 / 0x34363248), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 2151 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
    Stream #0:1: Audio: mp3 (U[0][0][0] / 0x0055), 44100 Hz, mono, fltp, 128 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
  Stream #0:1 -> #0:1 (mp3 (mp3float) -> aac (native))
Press [q] to stop, [?]
for help [libx264 @ 0000000002ebf4c0] using SAR=1/1 [libx264 @ 0000000002ebf4c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX [libx264 @ 0000000002ebf4c0] profile Progressive High, level 4.0, 4:2:0, 8-bit [libx264 @ 0000000002ebf4c0] 264 - core 157 r2970 5493be8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'E:\Test\projectVideos\encvideos\1557994804863.mp4': Metadata: genre : Other track : 1 encoder : Lavf58.27.103 Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc Metadata: encoder : Lavc58.52.100 libx264 Side data: cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1 Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 69 kb/s Metadata: encoder : Lavc58.52.100 aac frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.06 bitrate= 5.5kbits/s speed=0.138x frame= 46 fps= 23 q=0.0 size= 0kB time=00:00:01.88 bitrate= 0.2kbits/s speed=0.936x frame= 64 fps= 25 q=28.0 size= 0kB time=00:00:02.57 bitrate= 0.1kbits/s speed=1.02x frame= 88 fps= 29 q=28.0 size= 0kB time=00:00:03.55 bitrate= 0.1kbits/s speed=1.17x frame= 108 fps= 31 q=28.0 size= 0kB time=00:00:04.34 bitrate= 0.1kbits/s speed=1.23x frame= 133 fps= 33 q=28.0 size= 0kB time=00:00:05.34 bitrate= 0.1kbits/s speed=1.32x frame= 155 fps= 34 q=28.0 size= 0kB time=00:00:06.22 bitrate= 0.1kbits/s speed=1.35x frame= 171 fps= 33 q=28.0 size= 0kB time=00:00:06.87 bitrate= 0.1kbits/s speed=1.33x frame= 186 fps= 33 q=28.0 size= 0kB time=00:00:07.47 bitrate= 0.1kbits/s speed=1.32x frame= 195 fps= 31 q=28.0 size= 0kB time=00:00:07.82 bitrate= 0.0kbits/s speed=1.25x frame= 206 fps= 30 q=28.0 size= 0kB time=00:00:08.26 bitrate= 0.0kbits/s speed=1.22x frame= 214 fps= 29 q=28.0 size= 0kB time=00:00:08.59 bitrate= 0.0kbits/s speed=1.17x frame= 223 fps= 28 q=28.0 size= 0kB time=00:00:08.96 bitrate= 0.0kbits/s speed=1.14x frame= 229 fps= 27 q=28.0 size= 0kB time=00:00:09.19 bitrate= 0.0kbits/s speed=1.09x frame= 232 fps= 25 q=28.0 size= 0kB time=00:00:09.28 bitrate= 0.0kbits/s speed=1.01x frame= 235 fps= 24 q=28.0 size= 256kB time=00:00:09.42 bitrate= 222.5kbits/s speed=0.965x frame= 240 fps= 23 q=28.0 size= 256kB time=00:00:09.63 bitrate= 217.7kbits/s speed=0.928x frame= 243 fps= 22 q=28.0 size= 256kB time=00:00:09.72 bitrate= 215.6kbits/s speed=0.894x frame= 247 fps= 21 q=28.0 size= 256kB time=00:00:09.89 bitrate= 212.1kbits/s speed=0.856x frame= 253 fps= 21 q=28.0 size= 512kB time=00:00:10.12 bitrate= 414.3kbits/s speed=0.832x frame= 258 fps= 20 q=28.0 size= 512kB time=00:00:10.33 bitrate= 406.0kbits/s speed=0.812x frame= 264 fps= 20 q=28.0 size= 512kB time=00:00:10.56 bitrate= 397.0kbits/s speed=0.791x frame= 268 fps= 19 q=28.0 size= 768kB time=00:00:10.72 bitrate= 586.5kbits/s speed=0.773x frame= 272 fps= 19 q=28.0 size= 768kB time=00:00:10.91 bitrate= 576.5kbits/s speed=0.753x frame= 278 fps= 19 q=28.0 size= 768kB time=00:00:11.14 bitrate= 
564.5kbits/s speed=0.743x frame= 281 fps= 18 q=28.0 size= 768kB time=00:00:11.26 bitrate= 558.7kbits/s speed=0.725x frame= 286 fps= 18 q=28.0 size= 1024kB time=00:00:11.47 bitrate= 731.3kbits/s speed=0.715x frame= 291 fps= 17 q=28.0 size= 1024kB time=00:00:11.67 bitrate= 718.3kbits/s speed=0.701x frame= 296 fps= 17 q=25.0 size= 1024kB time=00:00:11.86 bitrate= 707.0kbits/s speed=0.687x frame= 303 fps= 17 q=28.0 size= 1280kB time=00:00:12.14 bitrate= 863.5kbits/s speed=0.681x frame= 308 fps= 17 q=28.0 size= 1280kB time=00:00:12.35 bitrate= 848.9kbits/s speed=0.67x frame= 312 fps= 16 q=28.0 size= 1280kB time=00:00:12.51 bitrate= 837.8kbits/s speed=0.661x frame= 319 fps= 16 q=28.0 size= 1280kB time=00:00:12.79 bitrate= 819.6kbits/s speed=0.658x frame= 341 fps= 17 q=28.0 size= 1280kB time=00:00:13.65 bitrate= 768.0kbits/s speed=0.684x frame= 362 fps= 18 q=28.0 size= 1280kB time=00:00:14.48 bitrate= 723.7kbits/s speed=0.708x frame= 391 fps= 19 q=28.0 size= 1280kB time=00:00:15.67 bitrate= 669.0kbits/s speed=0.746x frame= 419 fps= 19 q=28.0 size= 1280kB time=00:00:16.67 bitrate= 629.0kbits/s speed=0.774x frame= 419 fps= 19 q=-1.0 Lsize= 1538kB time=00:00:16.71 bitrate= 753.7kbits/s speed=0.753x video:1480kB audio:43kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.976535% [libx264 @ 0000000002ebf4c0] frame I:2 Avg QP:13.37 size: 32816 [libx264 @ 0000000002ebf4c0] frame P:143 Avg QP:14.70 size: 6977 [libx264 @ 0000000002ebf4c0] frame B:274 Avg QP:13.65 size: 1647 [libx264 @ 0000000002ebf4c0] consecutive B-frames: 9.8% 7.6% 4.3% 78.3% [libx264 @ 0000000002ebf4c0] mb I I16..4: 38.7% 53.1% 8.1% [libx264 @ 0000000002ebf4c0] mb P I16..4: 17.5% 18.5% 0.2% P16..4: 5.5% 0.4% 0.2% 0.0% 0.0% skip:57.7% [libx264 @ 0000000002ebf4c0] mb B I16..4: 2.0% 0.7% 0.0% B16..8: 3.7% 0.2% 0.0% direct: 2.9% skip:90.5% L0:49.2% L1:48.4% BI: 2.4% [libx264 @ 0000000002ebf4c0] 8x8 transform intra:48.1% inter:84.8% [libx264 @ 0000000002ebf4c0] coded y,uvDC,uvAC intra: 3.9% 31.3% 4.5% inter: 0.7% 5.4% 0.2% [libx264 @ 0000000002ebf4c0] i16 v,h,dc,p: 22% 68% 4% 6% [libx264 @ 0000000002ebf4c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 29% 27% 42% 0% 0% 0% 0% 0% 0% [libx264 @ 0000000002ebf4c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 28% 24% 2% 4% 3% 4% 2% 3% [libx264 @ 0000000002ebf4c0] i8c dc,h,v,p: 48% 39% 11% 2% [libx264 @ 0000000002ebf4c0] Weighted P-Frames: Y:23.1% UV:22.4% [libx264 @ 0000000002ebf4c0] ref P L0: 65.5% 9.2% 23.1% 1.9% 0.3% [libx264 @ 0000000002ebf4c0] ref B L0: 71.1% 28.4% 0.5% [libx264 @ 0000000002ebf4c0] ref B L1: 98.8% 1.2% [libx264 @ 0000000002ebf4c0] kb/s:723.03 [aac @ 0000000002dc0980] Qavg: 47784.133 生成mp4视频为:E:\Test\projectVideos\temp\1557994804863.mp4 ``` (3)播放时的截图 ![图片说明](https://img-ask.csdn.net/upload/201905/16/1557997780_707491.png) (4) 播放视频jsp代码(使用的是video-js,但测试时也试了一下不加视频插件直接<video>标签播放,结果一样) ``` <video id="playVideo" class="video-js vjs-default-skin" controls ="true" preload="auto" width="960" height="480" poster="/images/${VIDEO.vpicture}" data-setup='{}'> <source src="/videos/${VIDEO.vpath}" type='video/mp4' /> </video> ``` ("/videos"为虚拟路径 实为本地存储地址) 希望好心人能够予以慷慨解答!~ 补充:在转wmv格式到MP4格式时出现了 ``` ConverVideoTest说:传入工具类的源视频为:E:\Test\projectVideos\temp\1558578421815.wmv ----接收到文件(E:\Test\projectVideos\temp\1558578421815.wmv)需要转换------- ----开始转文件(E:\Test\projectVideos\temp\1558578421815.wmv)-------------------------- 源视频类型为:wmv 可以转换,统一转为mp4文件 调用了ffmpeg.exe工具 该文件夹存在。 
ffmpeg输入的命令:E:\ffmpeg\bin\ffmpeg.exe-iE:\Test\projectVideos\temp\1558578421815.wmv-acodecaac-vcodeclibx264-yE:\Test\projectVideos\encvideos\1558578421815.mp4 ffmpeg version N-93678-g4b7166c9d5 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 8.3.1 (GCC) 20190414 configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt libavutil 56. 26.100 / 56. 26.100 libavcodec 58. 52.100 / 58. 52.100 libavformat 58. 27.103 / 58. 27.103 libavdevice 58. 7.100 / 58. 7.100 libavfilter 7. 50.100 / 7. 50.100 libswscale 5. 4.100 / 5. 4.100 libswresample 3. 4.100 / 3. 4.100 libpostproc 55. 4.100 / 55. 4.100 Input #0, asf, from 'E:\Test\projectVideos\temp\1558578421815.wmv': Metadata: DeviceConformanceTemplate: M1 WMFSDKNeeded : 0.0.0.0000 WM/WMADRCPeakReference: 7851 WM/WMADRCPeakTarget: 7851 WM/WMADRCAverageReference: 1027 WM/WMADRCAverageTarget: 1027 WMFSDKVersion : 12.0.7601.17514 IsVBR : 0 Duration: 00:00:16.58, bitrate: 1969 kb/s Stream #0:0(chi): Audio: wmapro (b[1][0][0] / 0x0162), 48000 Hz, stereo, fltp, 256 kb/s Stream #0:1(chi): Video: vc1 (Advanced) (WVC1 / 0x31435657), yuv420p, 1352x696 [SAR 1:1 DAR 169:87], 3400 kb/s, 30 tbr, 1k tbn, 60 tbc Stream mapping: Stream #0:1 -> #0:0 (vc1 (native) -> h264 (libx264)) Stream #0:0 -> #0:1 (wmapro (native) -> aac (native)) Press [q] to stop, [?] 
[libx264 @ 0000000000531c80] using SAR=1/1
[libx264 @ 0000000000531c80] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0000000000531c80] profile Progressive High, level 3.2, 4:2:0, 8-bit
[libx264 @ 0000000000531c80] 264 - core 157 r2970 5493be8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2019 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
frame= 11 fps=0.0 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s dup=1 drop=0 speed=N/A
frame= 52 fps= 48 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s dup=1 drop=0 speed=N/A
frame= 85 fps= 54 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s dup=1 drop=0 speed=N/A
frame= 145 fps= 64 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s dup=47 drop=0 speed=N/A
frame= 171 fps= 61 q=0.0 size= 0kB time=-577014:32:22.77 bitrate= -0.0kbits/s dup=55 drop=0 speed=N/A
Too many packets buffered for output stream 0:0.
[libx264 @ 0000000000531c80] frame I:2 Avg QP:15.28 size: 56950
[libx264 @ 0000000000531c80] frame P:34 Avg QP:14.18 size: 1252
[libx264 @ 0000000000531c80] frame B:93 Avg QP:15.53 size: 140
[libx264 @ 0000000000531c80] consecutive B-frames: 3.1% 1.6% 2.3% 93.0%
[libx264 @ 0000000000531c80] mb I I16..4: 14.9% 79.5% 5.6%
[libx264 @ 0000000000531c80] mb P I16..4: 0.7% 1.1% 0.0% P16..4: 4.1% 1.4% 1.3% 0.0% 0.0% skip:91.4%
[libx264 @ 0000000000531c80] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 3.6% 0.0% 0.0% direct: 0.0% skip:96.4% L0:36.8% L1:63.2% BI: 0.0%
[libx264 @ 0000000000531c80] 8x8 transform intra:74.4% inter:80.1%
[libx264 @ 0000000000531c80] coded y,uvDC,uvAC intra: 38.4% 33.4% 20.3% inter: 0.5% 0.2% 0.0%
[libx264 @ 0000000000531c80] i16 v,h,dc,p: 61% 25% 9% 6%
[libx264 @ 0000000000531c80] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 31% 21% 2% 3% 3% 3% 3% 3%
[libx264 @ 0000000000531c80] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 27% 32% 14% 4% 5% 5% 5% 4% 6%
[libx264 @ 0000000000531c80] i8c dc,h,v,p: 76% 13% 8% 4%
[libx264 @ 0000000000531c80] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000000000531c80] ref P L0: 85.1% 6.8% 6.5% 1.6%
[libx264 @ 0000000000531c80] ref B L0: 48.1% 47.5% 4.4%
[libx264 @ 0000000000531c80] ref B L1: 93.2% 6.8%
[libx264 @ 0000000000531c80] kb/s:315.41
Conversion failed!
Generated MP4 video: E:\Test\projectVideos\temp1558578421815.mp4
=========== Video transcoding finished, starting screenshot =================
The folder exists.
Screenshot command: E:\ffmpeg\bin\ffmpeg.exe-ss00:00:01-iE:\Test\projectVideos\temp\1558578421815.wmv-y-fimage2-s154x90E:\Test\projectVideos\images\1558578421815.jpg
ffmpeg version N-93678-g4b7166c9d5 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 8.3.1 (GCC) 20190414
  configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
  libavutil      56. 26.100 / 56. 26.100
  libavcodec     58. 52.100 / 58. 52.100
  libavformat    58. 27.103 / 58. 27.103
  libavdevice    58.  7.100 / 58.  7.100
  libavfilter     7. 50.100 /  7. 50.100
  libswscale      5.  4.100 /  5.  4.100
  libswresample   3.  4.100 /  3.  4.100
  libpostproc    55.  4.100 / 55.  4.100
Input #0, asf, from 'E:\Test\projectVideos\temp\1558578421815.wmv':
  Metadata:
    DeviceConformanceTemplate: M1
    WMFSDKNeeded    : 0.0.0.0000
    WM/WMADRCPeakReference: 7851
    WM/WMADRCPeakTarget: 7851
    WM/WMADRCAverageReference: 1027
    WM/WMADRCAverageTarget: 1027
    WMFSDKVersion   : 12.0.7601.17514
    IsVBR           : 0
  Duration: 00:00:16.58, bitrate: 1969 kb/s
    Stream #0:0(chi): Audio: wmapro (b[1][0][0] / 0x0162), 48000 Hz, stereo, fltp, 256 kb/s
    Stream #0:1(chi): Video: vc1 (Advanced) (WVC1 / 0x31435657), yuv420p, 1352x696 [SAR 1:1 DAR 169:87], 3400 kb/s, 30 tbr, 1k tbn, 60 tbc
Stream mapping:
  Stream #0:1 -> #0:0 (vc1 (native) -> mjpeg (native))
Press [q] to stop, [?] for help
[swscaler @ 0000000002ecd4c0] deprecated pixel format used, make sure you did set range correctly
Output #0, image2, to 'E:\Test\projectVideos\images\1558578421815.jpg':
  Metadata:
    DeviceConformanceTemplate: M1
    WMFSDKNeeded    : 0.0.0.0000
    WM/WMADRCPeakReference: 7851
    WM/WMADRCPeakTarget: 7851
    WM/WMADRCAverageReference: 1027
    WM/WMADRCAverageTarget: 1027
    WMFSDKVersion   : 12.0.7601.17514
    IsVBR           : 0
    encoder         : Lavf58.27.103
    Stream #0:0(chi): Video: mjpeg, yuvj420p(pc), 154x90 [SAR 2535:2233 DAR 169:87], q=2-31, 200 kb/s, 30 fps, 30 tbn, 30 tbc
    Metadata:
      encoder         : Lavc58.52.100 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
[image2 @ 0000000000628e00] Could not get frame filename number 2 from pattern 'E:\Test\projectVideos\images\1558578421815.jpg'. Use '-frames:v 1' for a single image, or '-update' option, or use a pattern such as %03d within the filename.
av_interleaved_write_frame(): Invalid argument
frame= 2 fps=0.0 q=1.6 size=N/A time=00:00:00.06 bitrate=N/A speed=0.102x
frame= 2 fps=0.0 q=1.6 Lsize=N/A time=00:00:00.06 bitrate=N/A speed=0.097x
video:3kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
Screenshot process finished
Screenshot succeeded!
```

That is, "Conversion failed!" occurred!!!
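A few things stand out in these logs. The printed commands show the arguments run together with no spaces, which suggests they are logged by simple string concatenation; that is harmless as long as the arguments really are passed to the process as separate list elements, but it is worth verifying. The WMV run aborts with "Too many packets buffered for output stream 0:0", for which raising `-max_muxing_queue_size` on the output is the usual workaround. The screenshot run fails exactly as ffmpeg's own hint says: writing a single image to a fixed filename needs `-frames:v 1` (or a `%03d` pattern). Finally, the utility prints "Screenshot succeeded!" even though ffmpeg reported "Conversion failed!", so the exit code apparently isn't checked. Below is a minimal, hypothetical Java sketch of how the utility class might invoke ffmpeg with those fixes; the `FfmpegRunner` class and its `run` helper are invented for illustration, only the paths and flags mirror the log, and `-movflags +faststart` is an optional extra that moves the MP4 index to the front of the file, which often helps browser playback.

```
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: not the asker's actual ConverVideoTest class.
public class FfmpegRunner {

    /** Runs one ffmpeg command, echoes its output, and returns the exit code. */
    static int run(List<String> cmd) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(cmd); // each argument is a separate list element
        pb.redirectErrorStream(true);                // merge stderr into stdout so one reader suffices
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);            // drain the pipe so ffmpeg cannot block on a full buffer
            }
        }
        return p.waitFor();                          // 0 = success; non-zero means the run actually failed
    }

    public static void main(String[] args) throws Exception {
        String ffmpeg = "E:\\ffmpeg\\bin\\ffmpeg.exe";
        String src = "E:\\Test\\projectVideos\\temp\\1558578421815.wmv";
        String mp4 = "E:\\Test\\projectVideos\\encvideos\\1558578421815.mp4";
        String jpg = "E:\\Test\\projectVideos\\images\\1558578421815.jpg";

        // WMV -> MP4. -max_muxing_queue_size is the usual workaround for
        // "Too many packets buffered for output stream 0:0"; -movflags +faststart
        // moves the MP4 index to the front, which tends to help browser playback.
        int convert = run(Arrays.asList(ffmpeg, "-i", src,
                "-acodec", "aac", "-vcodec", "libx264",
                "-max_muxing_queue_size", "1024",
                "-movflags", "+faststart",
                "-y", mp4));
        System.out.println("convert exit code: " + convert);

        // Single-frame screenshot. -frames:v 1 is exactly what ffmpeg's own
        // error message suggests for a single-image output file.
        int shot = run(Arrays.asList(ffmpeg, "-ss", "00:00:01", "-i", src,
                "-frames:v", "1", "-f", "image2", "-s", "154x90",
                "-y", jpg));
        System.out.println("screenshot exit code: " + shot);
    }
}
```

Checking the value returned by `waitFor()` would also stop the utility from printing "Screenshot succeeded!" after a run that ffmpeg itself reported as failed.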