Those things about Principles of Computer Organization

In Principles of Computer Organization (Tang Shuofei), both the sign-magnitude code (原码) and the two's-complement code (补码) are introduced through defining formulas. The defining formula for the sign-magnitude code does not seem to cover -1, so my question is: does the sign-magnitude code of -1 exist at all? (The two defining formulas from the book are attached as a photo.)
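For reference, the two photographed formulas are not reproduced in the thread. The defining formula for the sign-magnitude code of a fixed-point integer is usually stated roughly as follows; this is a reconstruction written in terms of the word length n (1 sign bit plus n-1 magnitude bits), so treat the exact form as an assumption rather than a quote from the book:

$$
[x]_{原} =
\begin{cases}
x, & 2^{\,n-1} > x \ge 0 \\
2^{\,n-1} - x = 2^{\,n-1} + |x|, & 0 \ge x > -2^{\,n-1}
\end{cases}
$$

For x = -1 the second branch applies, since 0 ≥ -1 > -2^{n-1}, so the definition does cover -1.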

2 answers

Say the word is 8 bits; then the sign-magnitude code of -1 is 10000001.
The highest (sign) bit is 1, and the remaining bits are the same as the sign-magnitude code of +1.

baidu_33836580
Reply to Mr_Penguin: look at your picture. The second line circled at the top, 0 ≥ x > -128 (that is, x > -2^7), already includes -1.
over 3 years ago · Reply
qq_28284627
Replying to Mr_Penguin: that makes sense, but that rule of thumb is something people summarized afterwards. The definition the book gives is the two formulas in my photo, and what I want to ask is why the book's formulas do not include -1 in the definition.
almost 4 years ago · Reply

Assume n = 8 (an 8-bit word).

In the picture, the second line circled at the top, 0 ≥ x > -128 (that is, x > -2^7), already includes -1.
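To make the two answers concrete, here is a minimal sketch in plain Python 3 (an illustration added here, not code from the original thread; the function name sign_magnitude is made up) that applies the defining formula for an 8-bit word and shows that -1 is representable while -128 is not:

```python
def sign_magnitude(x: int, bits: int = 8) -> str:
    """Sign-magnitude code of x for a word of `bits` bits (1 sign bit plus
    bits-1 magnitude bits): [x] = x for x >= 0, [x] = 2**(bits-1) - x for x < 0."""
    n = bits - 1                          # number of magnitude bits
    if not -(2 ** n) < x < 2 ** n:        # representable range: -(2**n - 1) .. 2**n - 1
        raise ValueError(f"{x} has no {bits}-bit sign-magnitude code")
    code = x if x >= 0 else 2 ** n - x    # the defining formula
    return format(code, f"0{bits}b")

print(sign_magnitude(-1))     # 10000001  -> -1 is covered by the second branch
print(sign_magnitude(1))      # 00000001
print(sign_magnitude(-127))   # 11111111
# sign_magnitude(-128) raises ValueError: -128 lies outside 0 >= x > -2**7
```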

Other related recommendations
Those things about the Bootstrap DataTable
I have just started with the Bootstrap framework and a lot of it is still unfamiliar. When using DataTables, there is a search box; typing certain letters such as p or u into it produces no filtering at all, while everything else works fine. The demo on the official site does not show this behaviour, which seems very strange. Has anyone run into something similar? Any pointers appreciated!
Those things about BlueScreen (BSOD)
小本本最近遇到局域网内客户机频繁蓝屏问题。阴差阳错的问题抛到了我这边(what?)男人不能怂就是干。 然后,本本在网上查了大部分的蓝屏原因解决方法,并找到了客户机本地的蓝屏日志文件内容。日志贴到后面好了,目前问题还没有解决,,先来记录一下过程。希望有大腿给点意见: 本本先是按着这个说明搞得: https://jingyan.baidu.com/article/b87fe19ea527c25218356801.html 然后未见异常,找了一些其他方法未果,就去找了下面的日志: Microsoft (R) Windows Debugger Version 6.12.0002.633 AMD64 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [F:\zooqun\error\Minidump2\062718-12453-01.dmp] Mini Kernel Dump File: Only registers and stack trace are available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* ****Unable to load image \SystemRoot\system32\ntoskrnl.exe, ****Win32 error 0n2 *** WARNING: Unable to verify timestamp for ntoskrnl.exe *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe Windows 7 Kernel Version 7601 (Service Pack 1) MP (2 procs) Free x64 Product: WinNt, suite: TerminalServer SingleUserTS Built by: 7601.17514.amd64fre.win7sp1_rtm.101119-1850 Machine Name: Kernel base = 0xfffff800`03e0d000 PsLoadedModuleList = 0xfffff800`04052e90 Debug session time: Wed Jun 27 08:33:22.599 2018 (UTC + 8:00) System Uptime: 0 days 0:00:45.021 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* Unable to load image \SystemRoot\system32\ntoskrnl.exe, Win32 error 0n2 *** WARNING: Unable to verify timestamp for ntoskrnl.exe *** ERROR: Module load completed but symbols could not be loaded for ntoskrnl.exe Loading Kernel Symbols ............................................................... ................................................................ ........... Loading User Symbols Loading unloaded module list .... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck D1, {0, 2, 0, fffff88001591401} ***** Kernel symbols are WRONG. Please fix symbols to do analysis.
Imputer missing from the sklearn package in the global Python environment
系统win10 64位,python版本3.7.4。 全局环境下,在我输入下载sklearn包的代码后,显示结果如下,包已经安装: ``` pip install sklearn Requirement already satisfied: sklearn in e:\python\lib\site-packages (0.0) Requirement already satisfied: scikit-learn in e:\python\lib\site-packages (from sklearn) (0.22) Requirement already satisfied: numpy>=1.11.0 in e:\python\lib\site-packages (from scikit-learn->sklearn) (1.17.4) Requirement already satisfied: scipy>=0.17.0 in e:\python\lib\site-packages (from scikit-learn->sklearn) (1.3.3) Requirement already satisfied: joblib>=0.11 in e:\python\lib\site-packages (from scikit-learn->sklearn) (0.14.1) ``` 然而在使用sklearn中的Imputer函数时,会出现报错: ``` >>> import numpy as np >>> import sklearn >>> from sklearn import preprocessing >>> from sklearn.preprocessing import Imputer Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'Imputer' from 'sklearn.preprocessing' (E:\python\lib\site-packages\sklearn\preprocessing\__init__.py) ``` 利用dir()查看包内的函数,发现没有Imputer: ``` >>> dir(sklearn.preprocessing) ['Binarizer', 'FunctionTransformer', 'KBinsDiscretizer', 'KernelCenterer', 'LabelBinarizer', 'LabelEncoder', 'MaxAbsScaler', 'MinMaxScaler', 'MultiLabelBinarizer', 'Normalizer', 'OneHotEncoder', 'OrdinalEncoder', 'PolynomialFeatures', 'PowerTransformer', 'QuantileTransformer', 'RobustScaler', 'StandardScaler', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_csr_polynomial_expansion', '_data', '_discretization', '_encoders', '_function_transformer', '_label', 'add_dummy_feature', 'binarize', 'label_binarize', 'maxabs_scale', 'minmax_scale', 'normalize', 'power_transform', 'quantile_transform', 'robust_scale', 'scale'] ``` 但是在conda的base环境下,陆续安装numpy、scipy、matplotlib后,安装 scikit-learn包,就可以使用这个函数了。 我寻找了很久的解决方案,网上说的路径和Imputer相同的情况没有发生,请问大佬们这究竟是怎么回事儿啊
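For context (this note is not part of the original question): in scikit-learn 0.20 the old sklearn.preprocessing.Imputer was deprecated in favour of sklearn.impute.SimpleImputer, and it was removed entirely in 0.22, the version shown in the pip output above. That would also explain why an environment with an older scikit-learn still exposes Imputer. A minimal sketch of the replacement usage, with a made-up toy array X:

```python
import numpy as np
from sklearn.impute import SimpleImputer   # replacement for sklearn.preprocessing.Imputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, 6.0]])

imputer = SimpleImputer(strategy="mean")   # mean imputation, like the old Imputer default
print(imputer.fit_transform(X))
# [[1. 2.]
#  [4. 3.]
#  [7. 6.]]
```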
Cannot connect to the database through the Druid connection pool even though the config files look fine. What is going on? Help, please!
![图片说明](https://img-ask.csdn.net/upload/201911/21/1574306321_773337.jpg)![图片说明](https://img-ask.csdn.net/upload/201911/21/1574306331_829124.jpg) ![图片说明](https://img-ask.csdn.net/upload/201911/21/1574300396_296307.jpg) ![图片说明](https://img-ask.csdn.net/upload/201911/21/1574300015_936456.jpg) ![图片说明](https://img-ask.csdn.net/upload/201911/21/1574300073_467134.jpg) 数据库路径没问题,HTML页面提交表单的action路径也没问题! [图片说明](https://img-ask.csdn.net/upload/201911/21/1574300276_94349.jpg)![图片说明](https://img-ask.csdn.net/upload/201911/21/1574300291_140945.jpg)![图片说明](https://img-ask.csdn.net/upload/201911/21/1574300299_55446.jpg)
Looking for recommendations of well-designed international IT forums with interesting, substantial content
This was not the question I originally asked. Since the original question got no good answer, I changed it to a small thing I have always wanted to know. Of course I could find this on Baidu; I tried, but quite a few of the results need a VPN to reach. I am asking here partly so the C-coins do not go to waste. The goal is a nicely designed international forum that makes me want to learn in an immersive way and improve both my English reading and my technical skills. Thanks a lot!!!
ZooKeeper election mechanism does not match what I observe in practice
Articles online about ZooKeeper's election mechanism say that if the myids are 1, 2, 3 and the servers with myid 1, 2, 3 are started in that order, then server 3 should become the leader and the others followers. But when I actually try it, 2 becomes the leader and 1 and 3 are followers! What is going on: did I do something wrong, or does the algorithm differ in my version? I am on 3.4.14.
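One common explanation, offered here as a hedged guess since the thread has no accepted answer: with three servers, the election finishes as soon as a quorum of two agrees. If the servers with myid 1 and 2 are both up before server 3 starts, the winning vote (votes are compared by epoch, then zxid, then myid) is myid 2, and server 3 simply joins the established leader as a follower afterwards. A toy Python sketch of that ordering rule; it only illustrates the comparison and is not ZooKeeper's actual FastLeaderElection code:

```python
# Fresh 3-node ensemble, myid 1, 2, 3 started in that order (as in the question).
ensemble_size = 3
quorum = ensemble_size // 2 + 1           # 2 out of 3

running = []
for myid in (1, 2, 3):                    # start order
    running.append((0, 0, myid))          # fresh server: epoch = 0, zxid = 0, votes for itself
    if len(running) >= quorum:
        leader = max(running)[2]          # best (epoch, zxid, myid) among the servers that are up
        print(f"servers {[s[2] for s in running]} form a quorum, leader is myid {leader}")
        break                             # election settles here; server 3 joins later as a follower
# prints: servers [1, 2] form a quorum, leader is myid 2
```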
The phone's front camera works but the rear camera does not. What could cause this? Any advice? Urgent!
STM32F103: with button interrupts configured and two interrupts pending at the same time, why does the lower-priority ISR not run after the higher-priority ISR finishes?
- __Test setup__
  - GROUPPRI = 0x05
  - KEY0, KEY1, KEY2 priorities set to (1, 2), (1, 0), (1, 1) respectively
- __ISR logic__ (the same pseudocode for each key's handler):

```c
void KEYx_IRQHandler(void)        /* pseudocode for each key's ISR */
{
    print("KEYx ISR start");
    delay_ms(1000);
    delay_ms(1000);
    print(NVIC->ISPR);            /* printed: the related interrupts' preemption priority is 0 */
    delay_ms(1000);
    print("KEYx ISR end");
}
```

- __Observed result__
  - Operation: press KEY0, KEY1, KEY2 in that order (all three presses within 2 seconds)
  - Result: the PEND values read (0, 1, 1); after the KEY0 ISR finishes, the KEY1/KEY2 ISRs never run.
- __Question: according to the documentation, shouldn't the lower-priority handlers run once the higher-priority ISR has finished? Why are the lower-priority interrupts simply dropped instead?__
IDEA shows a red squiggle under return with the message "Return outside method". I searched Baidu for a long time without solving it. Help!!!
![图片说明](https://img-ask.csdn.net/upload/201911/20/1574252329_135119.jpg) What is going on? The code looks fine, but there is a red underline and running it reports an error. Please help.
Tomcat is configured, but the first run of startup.bat fails with the errors below. What is going on?
``` 23-Sep-2019 20:04:51.539 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [web-app_3_1.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.540 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [web-fragment_3_1.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.540 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [web-common_3_1.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.541 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [javaee_7.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.541 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [jsp_2_3.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.541 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [javaee_web_services_1_4.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.542 警告 [localhost-startStop-1] org.apache.tomcat.util.descriptor.DigesterFactory.locationFor The XML schema [javaee_web_services_client_1_4.xsd] could not be found. This is very likely to break XML validation if XML validation is enabled. 23-Sep-2019 20:04:51.593 严重 [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/docs]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.catalina.LifecycleException: Failed to start component [Pipeline[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/docs]]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5075) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 
10 more Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.authenticator.NonLoginAuthenticator[/docs]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:178) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 12 more Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String; at org.apache.catalina.authenticator.AuthenticatorBase.startInternal(AuthenticatorBase.java:1222) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 14 more 23-Sep-2019 20:04:51.594 严重 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [D:\apache-tomcat-8.5.46\webapps\docs] java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/docs]] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:747) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 23-Sep-2019 20:04:51.599 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [D:\apache-tomcat-8.5.46\webapps\docs] has finished in [118] ms 23-Sep-2019 20:04:51.599 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory 把web 应用程序部署到目录 [D:\apache-tomcat-8.5.46\webapps\examples] 23-Sep-2019 20:04:51.727 严重 [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/examples]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.catalina.LifecycleException: Failed to start component [Pipeline[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/examples]]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at 
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5075) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 10 more Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.authenticator.FormAuthenticator[/examples]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:178) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 12 more Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String; at org.apache.catalina.authenticator.AuthenticatorBase.startInternal(AuthenticatorBase.java:1222) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 14 more 23-Sep-2019 20:04:51.728 严重 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [D:\apache-tomcat-8.5.46\webapps\examples] java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/examples]] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:747) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 23-Sep-2019 20:04:51.732 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [D:\apache-tomcat-8.5.46\webapps\examples] has finished in [132] ms 23-Sep-2019 20:04:51.732 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory 把web 应用程序部署到目录 [D:\apache-tomcat-8.5.46\webapps\host-manager] 23-Sep-2019 20:04:51.763 严重 [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/host-manager]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.catalina.LifecycleException: Failed to start component 
[Pipeline[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/host-manager]]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5075) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 10 more Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.authenticator.BasicAuthenticator[/host-manager]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:178) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 12 more Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String; at org.apache.catalina.authenticator.AuthenticatorBase.startInternal(AuthenticatorBase.java:1222) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 14 more 23-Sep-2019 20:04:51.764 严重 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [D:\apache-tomcat-8.5.46\webapps\host-manager] java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/host-manager]] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:747) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 23-Sep-2019 20:04:51.767 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [D:\apache-tomcat-8.5.46\webapps\host-manager] has finished in [35] ms 23-Sep-2019 20:04:51.768 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory 把web 应用程序部署到目录 [D:\apache-tomcat-8.5.46\webapps\manager] 23-Sep-2019 20:04:51.786 严重 [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/manager]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.catalina.LifecycleException: Failed to start component [Pipeline[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/manager]]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5075) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 10 more Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.authenticator.BasicAuthenticator[/manager]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:178) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 12 more Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String; at org.apache.catalina.authenticator.AuthenticatorBase.startInternal(AuthenticatorBase.java:1222) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 14 more 23-Sep-2019 20:04:51.787 严重 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [D:\apache-tomcat-8.5.46\webapps\manager] java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/manager]] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:747) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 23-Sep-2019 20:04:51.790 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [D:\apache-tomcat-8.5.46\webapps\manager] has finished in [23] ms 23-Sep-2019 20:04:51.790 信息 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory 把web 应用程序部署到目录 [D:\apache-tomcat-8.5.46\webapps\ROOT] 23-Sep-2019 20:04:51.806 严重 [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:743) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.catalina.LifecycleException: Failed to start component [Pipeline[StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5075) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 10 more Caused by: org.apache.catalina.LifecycleException: Failed to start component [org.apache.catalina.authenticator.NonLoginAuthenticator[]] at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) at org.apache.catalina.core.StandardPipeline.startInternal(StandardPipeline.java:178) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 12 more Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String; at org.apache.catalina.authenticator.AuthenticatorBase.startInternal(AuthenticatorBase.java:1222) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ... 14 more 23-Sep-2019 20:04:51.818 严重 [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [D:\apache-tomcat-8.5.46\webapps\ROOT] java.lang.IllegalStateException: ContainerBase.addChild: start: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]] at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:747) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:719) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:714) at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1125) at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1859) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` 看到有解决方法修改jdk包中内容的尝试后仍无解
Android Studio project reports an error when run: Error configuring CMake server
# 问题背景 我在跟着csdn上的一个教程尝试用Android Studio(3.4.1) 做一个变声器的app项目,fmod的so库都导入进项目的libs了 ,build.gradle和cmakelist.txt也都按照那个教程配置了。 [AS制作变声器](https://blog.csdn.net/a_thousand_miles/article/details/81150906 "") 后来报错说不支持armeabi,网上查了下发现NDK17以上都不 支持了,遂下载了ndk16b替换之前的ndk17,ndk的location 也都改过了,然后终于build成功,开心。 但是插上usb准备在手机上运行测试下的时候,出现这个报错 ,Google了也没解决掉。报错:Error configuring CMake server (E:\Android\Sdk\cmake\3.10.2.4988404\bin). ![图片说明](https://img-ask.csdn.net/upload/201909/29/1569748738_97421.png) ** 有大佬知道怎么回事儿吗?** ###代码(build.gradle) ``` apply plugin: 'com.android.application' android { compileSdkVersion 29 buildToolsVersion "29.0.2" defaultConfig { applicationId "com.example.qq_voicechanger01" minSdkVersion 15 targetSdkVersion 29 versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" externalNativeBuild { cmake { cppFlags "" abiFilters 'armeabi','x86' } } //编译平台 ndk{ abiFilters "armeabi","x86" } } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } externalNativeBuild { cmake { path "src/main/cpp/CMakeLists.txt" version "3.10.2" } } //目录 sourceSets.main{ jniLibs.srcDirs = ['libs'] jni.srcDirs = [] } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'androidx.appcompat:appcompat:1.1.0' implementation 'androidx.constraintlayout:constraintlayout:1.1.3' testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test:runner:1.2.0' androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0' } ``` ###代码(cmakelist.txt) ``` cmake_minimum_required(VERSION 3.4.1) find_library(log-lib log ) set(my_lib_path ${CMAKE_SOURECE_DIR}/libs) #添加第三方的so库 add_library( libfomd SHARED IMPORTED) #指明第三方so库的绝对路径 set_target_properties( libfmod PROPERTIES IMPORTED_LOCATION ${my_lib_path}/${ANDROID_ABI}/libfmod.so ) add_library( libfmodL SHARED IMPORTED ) set_target_properties( libfmodL PROPERTIES IMPORTED_LOCATION ${my_lib_path}/${ANDROID_ABI}/libfmodL.so ) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11") #添加我们需要编译的cpp绝对路径 add_library( changeVoice SHARED src/main/cpp/play_sound.cpp src/main/cpp/common.cpp src/main/cpp/common_platform.cpp ) #导入路径,使编译时能找到这个文件夹 include_directories(src/main/cpp/inc) #连接好三个路径 target_link_libraries( changeVoice libfmod libfmodL ${log-lib} ) ```
A question about CPU speed and memory size!
Suppose a very weak CPU and an 8 GB memory stick sit on the same motherboard. The CPU reads data from memory, processes it in the arithmetic logic unit, writes it back to memory, and memory then passes it on to the output unit. My question: if the memory is large and the CPU cannot get through all the data in it, what happens to the computer? Does it freeze, or something else?
A crawler for Baidu Muzhi Doctor: the first step is to scrape all the links for a given question, but nothing comes out. Could someone take a look at why?
#写在前面的话 在这个爬虫里我想实现把百度拇指医生里关于“咳嗽”的链接全部爬取下来,下一步要进行的是把爬取到的每个链接里的items里面的内容爬取下来,但是我在第一步就卡住了,求各位大神帮我看一下吧。之前刚刚发了一篇问答,但是不知道怎么回事儿,现在找不到了,(貌似是被删了...?)救救小白吧!感激不尽! 这个是我的爬虫的结构 ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574787999_274479.png) ##ks: ``` # -*- coding: utf-8 -*- import scrapy from kesou.items import KesouItem from scrapy.selector import Selector from scrapy.spiders import Spider from scrapy.http import Request ,FormRequest import pymongo class KsSpider(scrapy.Spider): name = 'ks' allowed_domains = ['kesou,baidu.com'] start_urls = ['https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M'] def parse(self, response): item = KesouItem() contents = response.xpath('.//h3[@class="t"]') for content in contents: url = content.xpath('.//a/@href').extract()[0] item['url'] = url yield item if self.offset < 760: self.offset += 10 yield scrapy.Request(url = "https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=" + str(self.offset) + "&oq=%E5%92%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX%2B1M",callback=self.parse,dont_filter=True) ``` ##items: ``` # -*- coding: utf-8 -*- # Define here the models for your scraped items # # See documentation in: # https://docs.scrapy.org/en/latest/topics/items.html import scrapy class KesouItem(scrapy.Item): # 问题ID question_ID = scrapy.Field() # 问题描述 question = scrapy.Field() # 医生回答发表时间 answer_pubtime = scrapy.Field() # 问题详情 description = scrapy.Field() # 医生姓名 doctor_name = scrapy.Field() # 医生职位 doctor_title = scrapy.Field() # 医生所在医院 hospital = scrapy.Field() ``` ##middlewares: ``` # -*- coding: utf-8 -*- # Define here the models for your spider middleware # # See documentation in: # https://docs.scrapy.org/en/latest/topics/spider-middleware.html from scrapy import signals class KesouSpiderMiddleware(object): # Not all methods need to be defined. If a method is not defined, # scrapy acts as if the spider middleware does not modify the # passed objects. @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) return s def process_spider_input(self, response, spider): # Called for each response that goes through the spider # middleware and into the spider. # Should return None or raise an exception. return None def process_spider_output(self, response, result, spider): # Called with the results returned from the Spider, after # it has processed the response. # Must return an iterable of Request, dict or Item objects. for i in result: yield i def process_spider_exception(self, response, exception, spider): # Called when a spider or process_spider_input() method # (from other spider middleware) raises an exception. # Should return either None or an iterable of Request, dict # or Item objects. pass def process_start_requests(self, start_requests, spider): # Called with the start requests of the spider, and works # similarly to the process_spider_output() method, except # that it doesn’t have a response associated. # Must return only requests (not items). for r in start_requests: yield r def spider_opened(self, spider): spider.logger.info('Spider opened: %s' % spider.name) class KesouDownloaderMiddleware(object): # Not all methods need to be defined. 
If a method is not defined, # scrapy acts as if the downloader middleware does not modify the # passed objects. @classmethod def from_crawler(cls, crawler): # This method is used by Scrapy to create your spiders. s = cls() crawler.signals.connect(s.spider_opened, signal=signals.spider_opened) return s def process_request(self, request, spider): # Called for each request that goes through the downloader # middleware. # Must either: # - return None: continue processing this request # - or return a Response object # - or return a Request object # - or raise IgnoreRequest: process_exception() methods of # installed downloader middleware will be called return None def process_response(self, request, response, spider): # Called with the response returned from the downloader. # Must either; # - return a Response object # - return a Request object # - or raise IgnoreRequest return response def process_exception(self, request, exception, spider): # Called when a download handler or a process_request() # (from other downloader middleware) raises an exception. # Must either: # - return None: continue processing this exception # - return a Response object: stops process_exception() chain # - return a Request object: stops process_exception() chain pass def spider_opened(self, spider): spider.logger.info('Spider opened: %s' % spider.name) ``` ##piplines: ``` # -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html import pymongo from scrapy.utils.project import get_project_settings settings = get_project_settings() class KesouPipeline(object): def __init__(self): host = settings["MONGODB_HOST"] port = settings["MONGODB_PORT"] dbname = settings["MONGODB_DBNAME"] sheetname= settings["MONGODB_SHEETNAME"] # 创建MONGODB数据库链接 client = pymongo.MongoClient(host = host, port = port) # 指定数据库 mydb = client[dbname] # 存放数据的数据库表名 self.sheet = mydb[sheetname] def process_item(self, item, spider): data = dict(item) self.sheet.insert(data) return item ``` ##settings: ``` # -*- coding: utf-8 -*- # Scrapy settings for kesou project # # For simplicity, this file contains only settings considered important or # commonly used. 
You can find more settings consulting the documentation: # # https://docs.scrapy.org/en/latest/topics/settings.html # https://docs.scrapy.org/en/latest/topics/downloader-middleware.html # https://docs.scrapy.org/en/latest/topics/spider-middleware.html BOT_NAME = 'kesou' SPIDER_MODULES = ['kesou.spiders'] NEWSPIDER_MODULE = 'kesou.spiders' # Crawl responsibly by identifying yourself (and your website) on the user-agent #USER_AGENT = 'kesou (+http://www.yourdomain.com)' # Obey robots.txt rules ROBOTSTXT_OBEY = False # Configure maximum concurrent requests performed by Scrapy (default: 16) #CONCURRENT_REQUESTS = 32 # Configure a delay for requests for the same website (default: 0) # See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay # See also autothrottle settings and docs #DOWNLOAD_DELAY = 3 # The download delay setting will honor only one of: #CONCURRENT_REQUESTS_PER_DOMAIN = 16 #CONCURRENT_REQUESTS_PER_IP = 16 # Disable cookies (enabled by default) COOKIES_ENABLED = False # Disable Telnet Console (enabled by default) #TELNETCONSOLE_ENABLED = False USER_AGENT="Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67.0" # Override the default request headers: #DEFAULT_REQUEST_HEADERS = { # 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', # 'Accept-Language': 'en', #} # Enable or disable spider middlewares # See https://docs.scrapy.org/en/latest/topics/spider-middleware.html #SPIDER_MIDDLEWARES = { # 'kesou.middlewares.KesouSpiderMiddleware': 543, #} # Enable or disable downloader middlewares # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html #DOWNLOADER_MIDDLEWARES = { # 'kesou.middlewares.KesouDownloaderMiddleware': 543, #} # Enable or disable extensions # See https://docs.scrapy.org/en/latest/topics/extensions.html #EXTENSIONS = { # 'scrapy.extensions.telnet.TelnetConsole': None, #} # Configure item pipelines # See https://docs.scrapy.org/en/latest/topics/item-pipeline.html ITEM_PIPELINES = { 'kesou.pipelines.KesouPipeline': 300, } # MONGODB 主机名 MONGODB_HOST = "127.0.0.1" # MONGODB 端口号 MONGODB_PORT = 27017 # 数据库名称 MONGODB_DBNAME = "ks" # 存放数据的表名称 MONGODB_SHEETNAME = "ks_urls" # Enable and configure the AutoThrottle extension (disabled by default) # See https://docs.scrapy.org/en/latest/topics/autothrottle.html #AUTOTHROTTLE_ENABLED = True # The initial download delay #AUTOTHROTTLE_START_DELAY = 5 # The maximum download delay to be set in case of high latencies #AUTOTHROTTLE_MAX_DELAY = 60 # The average number of requests Scrapy should be sending in parallel to # each remote server #AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0 # Enable showing throttling stats for every response received: #AUTOTHROTTLE_DEBUG = False # Enable and configure HTTP caching (disabled by default) # See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings #HTTPCACHE_ENABLED = True #HTTPCACHE_EXPIRATION_SECS = 0 #HTTPCACHE_DIR = 'httpcache' #HTTPCACHE_IGNORE_HTTP_CODES = [] #HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage' ``` ##run.py: ``` # -*- coding: utf-8 -*- from scrapy import cmdline cmdline.execute("scrapy crawl ks".split()) ``` ##这个是运行出来的结果: ``` PS D:\scrapy_project\kesou> scrapy crawl ks 2019-11-27 00:14:17 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: kesou) 2019-11-27 00:14:17 [scrapy.utils.log] INFO: Versions: lxml 4.3.2.0, libxml2 2.9.9, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twis.7.0, Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC 
v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b 26 Feb 2019), cryphy 2.6.1, Platform Windows-10-10.0.18362-SP0 2019-11-27 00:14:17 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'kesou', 'COOKIES_ENABLED': False, 'NEWSPIDER_MODULE': 'spiders', 'SPIDER_MODULES': ['kesou.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:67.0) Gecko/20100101 Firefox/67 2019-11-27 00:14:17 [scrapy.extensions.telnet] INFO: Telnet Password: 051629c46f34abdf 2019-11-27 00:14:17 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.corestats.CoreStats', 'scrapy.extensions.telnet.TelnetConsole', 'scrapy.extensions.logstats.LogStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware'] 2019-11-27 00:14:19 [scrapy.middleware] INFO: Enabled item pipelines: ['kesou.pipelines.KesouPipeline'] 2019-11-27 00:14:19 [scrapy.core.engine] INFO: Spider opened 2019-11-27 00:14:19 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2019-11-27 00:14:19 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2019-11-27 00:14:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%E5%92%B3%E5&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFXJvk%2FSYX% (referer: None) 2019-11-27 00:14:20 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.baidu.com/s?wd=%E5%92%B3%E5%97%BD&pn=0&oq=%B3%E5%97%BD&ct=2097152&ie=utf-8&si=muzhi.baidu.com&rsv_pq=980e0c55000e2402&rsv_t=ed3f0i5yeefxTMskgzim00cCUyVujMRnw0Vs4o1%2Bo%2Bohf9rFFSYX%2B1M> (referer: None) Traceback (most recent call last): File "d:\anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback yield next(it) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output for x in result: File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr> return (_set_referer(r) for r in result or ()) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr> return (r for r in result 
or () if _filter(r)) File "d:\anaconda3\lib\site-packages\scrapy\core\spidermw.py", line 84, in evaluate_iterable for r in iterable: File "d:\anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr> return (r for r in result or () if _filter(r)) File "D:\scrapy_project\kesou\kesou\spiders\ks.py", line 19, in parse item['url'] = url File "d:\anaconda3\lib\site-packages\scrapy\item.py", line 73, in __setitem__ (self.__class__.__name__, key)) KeyError: 'KesouItem does not support field: url' 2019-11-27 00:14:20 [scrapy.core.engine] INFO: Closing spider (finished) 2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 438, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 68368, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.992207, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804), 'log_count/DEBUG': 1, 2019-11-27 00:14:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 438, 'downloader/request_count': 1, 'downloader/request_method_count/GET': 1, 'downloader/response_bytes': 68368, 'downloader/response_count': 1, 'downloader/response_status_count/200': 1, 'elapsed_time_seconds': 0.992207, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 11, 26, 16, 14, 20, 855804), 'log_count/DEBUG': 1, 'log_count/ERROR': 1, 'log_count/INFO': 10, 'response_received_count': 1, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'spider_exceptions/KeyError': 1, 'start_time': datetime.datetime(2019, 11, 26, 16, 14, 19, 863597)} 2019-11-27 00:14:21 [scrapy.core.engine] INFO: Spider closed (finished) ```
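Judging from the traceback above (KeyError: 'KesouItem does not support field: url'), the immediate cause is that the spider assigns item['url'] but KesouItem never declares a url field. A hedged sketch of a corrected items.py; every field name other than url is copied from the original post:

```python
# items.py: the spider does item['url'] = url, so the Item must declare a `url` field;
# without it Scrapy raises KeyError: 'KesouItem does not support field: url'
import scrapy

class KesouItem(scrapy.Item):
    url = scrapy.Field()            # the field the traceback says is missing
    question_ID = scrapy.Field()    # the remaining fields are unchanged from the original
    question = scrapy.Field()
    answer_pubtime = scrapy.Field()
    description = scrapy.Field()
    doctor_name = scrapy.Field()
    doctor_title = scrapy.Field()
    hospital = scrapy.Field()
```

Two smaller issues are also visible in the spider itself, mentioned here as guesses from reading the posted code rather than confirmed fixes: allowed_domains = ['kesou,baidu.com'] uses a comma where a dot was probably intended, so the offsite middleware may drop follow-up requests, and self.offset is used in parse() without ever being initialised, which would raise an AttributeError once pagination starts.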
A query runs fine in Navicat but errors out in MyBatis
#### 报错提示的语句和我写的mybatis中的语句不一样,不知道怎么个回事儿0.0。 **mybatis中sql语句:** ``` SELECT t.m_id as 'id', t.name, t.numbers, t.s_id as 'spId', t.learned, t.learning, t.plan_finish_date as 'planFinishDate', t.plan, t.next_plan as 'nextPlan' FROM( SELECT m.id as 'm_id', m.name, m.createdate, s.numbers, s.id as 's_id', s.learned, s.learning, s.plan_finish_date , s.plan, s.next_plan FROM member AS m LEFT JOIN study_plan AS s ON m.id = s.m_id ORDER BY s.numbers DESC LIMIT 10000 ) t GROUP BY t.name ORDER BY t.createdate ``` **我多粘贴了一些报错信息** ``` Error querying database. Cause: java.sql.SQLException: sql injection violation, syntax error: syntax error, error in :' ) tmp_count',expect RPAREN, actual EOF tmp_count : select count(1) from (SELECT t.m_id as 'id', t.name as 'name', t.numbers as 'numbers', t.sp_id as 'spId', t.learned as 'learned', t.learning as 'learning', t.plan_finish_date as 'planFinishDate', t.plan as 'plan', t.createdate as 'createdate', t.next_plan as 'nextPlan' FROM ( SELECT m.id as 'm_id', m.`name`, m.createdate, s.numbers, s.id as 'sp_id', s.learned, s.learning, s.plan_finish_date , s.plan, s.next_plan FROM member AS m LEFT JOIN study_plan AS s ON m.id = s.m_id ) tmp_count ### Cause: java.sql.SQLException: sql injection violation, syntax error: syntax error, error in :' ) tmp_count',expect RPAREN, actual EOF tmp_count : select count(1) from (SELECT t.m_id as 'id', t.name as 'name', t.numbers as 'numbers', t.sp_id as 'spId', t.learned as 'learned', t.learning as 'learning', t.plan_finish_date as 'planFinishDate', t.plan as 'plan', t.createdate as 'createdate', t.next_plan as 'nextPlan' FROM ( SELECT m.id as 'm_id', m.`name`, m.createdate, s.numbers, s.id as 'sp_id', s.learned, s.learning, s.plan_finish_date , s.plan, s.next_plan FROM member AS m LEFT JOIN study_plan AS s ON m.id = s.m_id ) tmp_count ; uncategorized SQLException for SQL []; SQL state [null]; error code [0]; sql injection violation, syntax error: syntax error, error in :' ) tmp_count',expect RPAREN, actual EOF tmp_count : select count(1) from (SELECT t.m_id as 'id', t.name as 'name', t.numbers as 'numbers', t.sp_id as 'spId', t.learned as 'learned', t.learning as 'learning', t.plan_finish_date as 'planFinishDate', t.plan as 'plan', t.createdate as 'createdate', t.next_plan as 'nextPlan' FROM ( SELECT m.id as 'm_id', m.`name`, m.createdate, s.numbers, s.id as 'sp_id', s.learned, s.learning, s.plan_finish_date , s.plan, s.next_plan FROM member AS m LEFT JOIN study_plan AS s ON m.id = s.m_id ) tmp_count; nested exception is java.sql.SQLException: sql injection violation, syntax error: syntax error, error in :' ) tmp_count',expect RPAREN, actual EOF tmp_count : select count(1) from (SELECT t.m_id as 'id', t.name as 'name', t.numbers as 'numbers', t.sp_id as 'spId', t.learned as 'learned', t.learning as 'learning', t.plan_finish_date as 'planFinishDate', t.plan as 'plan', t.createdate as 'createdate', t.next_plan as 'nextPlan' FROM ( SELECT m.id as 'm_id', m.`name`, m.createdate, s.numbers, s.id as 'sp_id', s.learned, s.learning, s.plan_finish_date , s.plan, s.next_plan FROM member AS m LEFT JOIN study_plan AS s ON m.id = s.m_id ) tmp_count ``` xml截个图 ![图片说明](https://img-ask.csdn.net/upload/201911/20/1574229873_476553.png)
The clipChildren attribute has no effect on PullToRefreshViewPager. What is going on?
I want a ViewPager card layout that can also be pulled to refresh. Refreshing works now, but the card effect does not show up; the clipChildren attribute simply stops working.
A question about Thread.yield in Java; hoping someone can clear it up
Thanks for taking the time to look at my question. My thinking is: when thread A calls yield(), it effectively gives other threads a chance by letting the CPU re-select a thread to run. So can I understand it this way: if my CPU has N cores (N > 3) and all N cores are idle, and the number of runnable threads is less than N, then calling yield() is essentially meaningless. Is that right?
How is API hooking implemented at the lowest level?
Does anyone know how hooking is actually implemented underneath? Process address spaces are isolated from one another, so how does hooking get around that isolation and inject one process's code into another? Concretely, Android has a sandbox mechanism; how does a hook bypass the sandbox to inject code from one process into another? Please do not answer with the usual "a hook is like a fishhook" metaphor and that classic hook diagram; such analogies only confuse people who want the essence. I want the concrete low-level implementation, especially how the process-isolation barrier is crossed.
tomcat8.exe and tomcat8w.exe will not start. What could be wrong? Thanks
Tomcat 8 (zip distribution): startup.bat starts it fine, and it also starts fine from inside Eclipse. [2013-12-29 19:06:57] [error] ( javajni.c:863 ) [ 6896] FindClass org/apache/catalina/startup/Bootstrap failed [2013-12-29 19:06:57] [error] ( prunsrv.c:1183) [ 7236] Failed to start Java. Windows cannot start Tomcat on the local computer, error code 4. It is not a port problem, and not the msvcr71.dll issue either.
Why does changing a C++ header, such as adding a variable or even typing random garbage like sdflksdjf, have no effect on the whole project?