Python Tornado 4.0.2 async issue

When using Tornado, each request sleeps for 5 seconds in an asynchronous way, but testing from the browser shows no async effect at all. Does anyone know why?

```python
import time

import tornado.gen
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.options import define, options

define("port", default=8000, type=int)  # the original never defines the port option; 8000 is assumed


class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous  # note: redundant when @gen.coroutine is used (Tornado 3.1+)
    @tornado.gen.coroutine
    def get(self):
        print("begin")
        # equivalent spelling:
        # yield tornado.gen.Task(tornado.ioloop.IOLoop.instance().add_timeout, time.time() + 5)
        yield gen.Task(IOLoop.instance().add_timeout, IOLoop.instance().time() + 5)
        print("after")
        greeting = self.get_argument('greeting', 'Hello')
        self.write(greeting + ', friendly user!')
        self.finish()


if __name__ == "__main__":
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r"/", IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
```

2 Answers

Is the browser making the requests asynchronously via AJAX? If not, the JavaScript side runs serially.

Open several browser windows and you can see the effect: the interval between the two responses will definitely be less than 5 s.
Actually, just hit it with Apache's ab at a concurrency of 100 and the result speaks for itself.
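For reference, a minimal concurrency probe in Python (a sketch: the URL and port are assumptions, since the question never defines the `port` option). It fires five requests in parallel threads and prints per-request timings; if the handler really yields during the 5-second wait, total wall time stays near 5 s rather than 25 s.

```python
# Minimal sketch of a concurrency test; URL/port are assumed values.
import time
from concurrent.futures import ThreadPoolExecutor

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

URL = "http://127.0.0.1:8000/"  # hypothetical address of the demo server

def timed_fetch(i):
    start = time.time()
    urlopen(URL).read()
    return i, time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    for i, elapsed in pool.map(timed_fetch, range(5)):
        print("request %d took %.1fs" % (i, elapsed))
print("total wall time: %.1fs" % (time.time() - start))
```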

Other related questions
How do I write an async endpoint in Python Tornado?
Why does the same code behave asynchronously in someone else's video, while mine still runs synchronously? ![screenshot](https://img-ask.csdn.net/upload/202002/06/1580996346_773904.png) The code is as follows:
```
import tornado.web
import time
import tornado.gen
import tornado.concurrent
from concurrent.futures import ThreadPoolExecutor

class IndexHandler(tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(900)

    @tornado.concurrent.run_on_executor
    def db_querys(self):
        print("start waiting")
        time.sleep(10)
        print("done waiting")
        print("------------------------------------")
        self.write("ok")

    @tornado.gen.coroutine
    def get(self):
        print("started")
        yield self.db_querys()
        print("finished")
```
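One likely pitfall in the snippet above (hedged, since the screenshot is not reproduced here): `self.write("ok")` runs on the executor's worker thread, but `RequestHandler` methods are not thread-safe. Only the blocking call should live on the executor, with the write happening back in the coroutine. A minimal sketch of that split:

```python
import time

import tornado.concurrent
import tornado.gen
import tornado.web
from concurrent.futures import ThreadPoolExecutor


class IndexHandler(tornado.web.RequestHandler):
    executor = ThreadPoolExecutor(4)  # pool size is an arbitrary choice for the sketch

    @tornado.concurrent.run_on_executor
    def db_query(self):
        time.sleep(10)   # the blocking work stays on the worker thread
        return "ok"      # return the result instead of touching the handler here

    @tornado.gen.coroutine
    def get(self):
        result = yield self.db_query()  # the coroutine resumes on the IOLoop thread
        self.write(result)              # safe: write happens on the IOLoop thread
```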
tornado web.py: overriding the constructor of tornado.web.RequestHandler
I want to pass a parameter through the URL route, but I don't know how to override the constructor of tornado.web.RequestHandler; please advise. How do I receive the "abc" that the route passes to AppHandler? Many thanks!
```
# coding: utf-8
import datetime, sys, SocketServer, time
import tornado.httpserver
import tornado.ioloop
from tornado.options import define, options
import tornado.web
import tornado.database
import tornado.escape
import urlparse
import urllib
import re

reload(sys)
sys.setdefaultencoding('utf-8')

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/abc", AppHandler("abc")),
        ]
        settings = dict(
            debug = False,
        )
        tornado.web.Application.__init__(self, handlers, **settings)

class AppHandler(tornado.web.RequestHandler):
    def __init__(self, *args, **kwargs):
        tornado.web.RequestHandler.__init__(self, *args, **kwargs)
        self.action = ${receive "abc" here}

    def post(self):
        try:
            self.today = datetime.datetime.today()
            self.did = self.get_argument("did", default="")
        except:
            pass
        self.set_status(204)
        self.finish()

    def get(self):
        try:
            pass
        except Exception, e:
            raise
        else:
            pass
        finally:
            pass

def main(argv):
    tornado.options.parse_command_line()
    http_server = tornado.httpserver.HTTPServer(Application(), xheaders=True)
    http_server.listen(int(argv[1]))
    tornado.ioloop.IOLoop.instance().start()
    print "start listening..."

if __name__ == "__main__":
    main(sys.argv)
```
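For what it's worth, the usual way to do this in Tornado needs no `__init__` override at all: the third element of a handler tuple is a dict of keyword arguments that Tornado passes to the handler's `initialize()` hook. A minimal sketch:

```python
import tornado.web

class AppHandler(tornado.web.RequestHandler):
    # Tornado calls initialize() with the dict supplied in the route,
    # so there is no need to override __init__.
    def initialize(self, action):
        self.action = action

    def get(self):
        self.write(self.action)  # responds with "abc"

application = tornado.web.Application([
    (r"/abc", AppHandler, dict(action="abc")),
])
```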
tkinter's Tcl interpreter fails to load wtxtcl.dll from WindRiver Tornado 2.2
The project needs to load wtxtcl.dll from the WindRiver Tornado 2.2 installation directory through a Tcl interpreter under Python, in order to drive Tornado 2.2's Tcl-based control. The Python 3.4 code:
```
import tkinter
tcl = tkinter.Tcl()
tcl.eval('load d:/applications/tornado2.2/host/x86-win32/bin/wtxtcl.dll wtxtcl')
```
Then this happens: ![python.exe stops working](https://img-ask.csdn.net/upload/201511/06/1446776063_275000.png) But it works fine under tclsh: ![tclsh works](https://img-ask.csdn.net/upload/201511/06/1446776176_698787.jpg) The wtxPath command in the tclsh screenshot is there to verify the DLL loaded successfully, since wtxPath is a command defined in wtxtcl.dll. Please help!
How to implement UDP broadcast in Tornado 2.2
To implement UDP broadcast in Tornado 2.2, would simulating it with a few vxsim instances work?
How do I create an environment with a specific Python version and JupyterLab?
How can I create an environment in Anaconda that has both Python 3.7.4 and JupyterLab 1.1.4? I created a Python 3.7.4 environment in cmd with the following command:
```batch
D:\Anaconda3\envs>conda create -n dp python=3.7
WARNING: A directory already exists at the target location 'D:\Anaconda3\envs\dp'
but it is not a conda environment.
Continue creating environment (y/[n])? y
Collecting package metadata (current_repodata.json): done
Solving environment: done

## Package Plan ##

  environment location: D:\Anaconda3\envs\dp

  added / updated specs:
    - python=3.7

The following NEW packages will be INSTALLED:

  ca-certificates    anaconda/pkgs/main/win-64::ca-certificates-2019.8.28-0
  certifi            anaconda/pkgs/main/win-64::certifi-2019.9.11-py37_0
  openssl            anaconda/pkgs/main/win-64::openssl-1.1.1d-he774522_2
  pip                anaconda/pkgs/main/win-64::pip-19.2.3-py37_0
  python             anaconda/pkgs/main/win-64::python-3.7.4-h5263a28_0
  setuptools         anaconda/pkgs/main/win-64::setuptools-41.4.0-py37_0
  sqlite             anaconda/pkgs/main/win-64::sqlite-3.30.0-he774522_0
  vc                 anaconda/pkgs/main/win-64::vc-14.1-h0510ff6_4
  vs2015_runtime     anaconda/pkgs/main/win-64::vs2015_runtime-14.16.27012-hf0eaf9b_0
  wheel              anaconda/pkgs/main/win-64::wheel-0.33.6-py37_0
  wincertstore       anaconda/pkgs/main/win-64::wincertstore-0.2-py37_0
```
Python 3.7.4 installed successfully, but when I went on to install JupyterLab, the following problem appeared:
```batch
D:\Anaconda3\envs>activate dp

D:\Anaconda3\envs>conda.bat activate dp

(dp) D:\Anaconda3\envs>conda install jupyterlab
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: | Found conflicts! Looking for incompatible packages.
...
UnsatisfiableError: The following specifications were found to be incompatible with each other:

Package tornado conflicts for:
  jupyterlab -> tornado[version='!=6.0.0,!=6.0.1,!=6.0.2']
Package jinja2 conflicts for:
  jupyterlab -> jinja2[version='>=2.10']
Package notebook conflicts for:
  jupyterlab -> notebook[version='>=4.3|>=4.3.1']
Package nodejs conflicts for:
  jupyterlab -> nodejs[version='<10']
Package sqlite conflicts for:
  python=3.7 -> sqlite[version='>=3.25.3,<4.0a0|>=3.26.0,<4.0a0|>=3.27.2,<4.0a0|>=3.28.0,<4.0a0|>=3.29.0,<4.0a0']
Package openssl conflicts for:
  python=3.7 -> openssl[version='>=1.1.1a,<1.1.2a|>=1.1.1b,<1.1.2a|>=1.1.1c,<1.1.2a']
Package pip conflicts for:
  python=3.7 -> pip
Package jupyterlab_launcher conflicts for:
  jupyterlab -> jupyterlab_launcher[version='>=0.10.0,<0.11.0|>=0.11.0,<0.12.0|>=0.11.2,<0.12.0|>=0.13.1,<0.14.0|>=0.4.0|>=0.6.0,<0.7.0']
Package subprocess32 conflicts for:
  jupyterlab -> subprocess32
Package futures conflicts for:
  jupyterlab -> futures
Package jupyterlab_server conflicts for:
  jupyterlab -> jupyterlab_server[version='>=0.2.0,<0.3.0|>=1.0.0,<2.0.0']
Package vc conflicts for:
  python=3.7 -> vc[version='14.*|>=14.1,<15.0a0']
```
What is this problem and how do I solve it? The default base environment feels enormous, full of things I never use, but to use JupyterLab I have no choice but to install it...
PyCharm reports NotImplementedError when using Tornado (Python)
The code could not be simpler, but I really cannot make sense of this error, and nobody else seems to run into it. A complete newbie asking the experts for help. ![screenshot](https://img-ask.csdn.net/upload/202002/16/1581857987_236379.png)![screenshot](https://img-ask.csdn.net/upload/202002/16/1581858004_793028.png)
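The screenshots are not reproduced here, but this symptom commonly appears with Python 3.8 on Windows, where asyncio's default proactor event loop lacks the `add_reader` support Tornado needs. If that is the situation, a sketch of the usual workaround is to switch to the selector policy before starting Tornado:

```python
import sys
import asyncio

# On Windows + Python 3.8, asyncio defaults to the proactor loop, which does not
# implement add_reader()/add_writer(); Tornado then dies with NotImplementedError.
if sys.platform == "win32" and sys.version_info >= (3, 8):
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
```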
TensorFlow errors with "No OpKernel was registered to support Op 'NcclAllReduce' with these attrs" when training StyleGAN
Testing the official StyleGAN. Running the official pretrained-model scripts pretrained_example.py and generate_figures.py works fine; the GPU works normally. Running train.py raises the error below, while training with only a single GPU raises no error. NcclAllReduce seems to be related to multi-GPU communication, which I don't know much about.

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]]

After much googling I have tried: rebooting; conda install keras-gpu; reinstalling tensorflow-gpu==1.10.0 (to match the official version).
```
……
Building TensorFlow graph...
Setting up snapshot image grid...
Setting up run dir...
Training...
Traceback (most recent call last):
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call
    return fn(*args)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs.
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels>
  [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 191, in <module>
    main()
  File "train.py", line 186, in main
    dnnlib.submit_run(**kwargs)
  File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run
    run_wrapper(submit_config)
  File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper
    util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs)
  File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name
    return func_obj(*args, **kwargs)
  File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 230, in training_loop
    tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch})
  File "E:\MachineLearning\stylegan-master\dnnlib\tflib\tfutil.py", line 26, in run
    return tf.get_default_session().run(*args, **kwargs)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 877, in run
    run_metadata_ptr)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run
    feed_dict_tensor, options, run_metadata)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run
    run_metadata)
  File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs.
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] Caused by op 'TrainD/SumAcrossGPUs/NcclAllReduce', defined at: File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 185, in training_loop D_train_op = D_opt.apply_updates() File "E:\MachineLearning\stylegan-master\dnnlib\tflib\optimizer.py", line 135, in apply_updates g = nccl_ops.all_sum(g) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 49, in all_sum return _apply_all_reduce('sum', tensors) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 230, in _apply_all_reduce shared_name=shared_name)) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\ops\gen_nccl_ops.py", line 59, in nccl_all_reduce num_devices=num_devices, shared_name=shared_name, name=name) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func return func(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3156, in create_op op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] ``` ``` #conda list: # Name Version Build Channel _tflow_select 2.1.0 gpu absl-py 0.8.1 pypi_0 pypi alabaster 0.7.12 py36_0 asn1crypto 1.2.0 py36_0 astor 0.8.0 pypi_0 pypi astroid 2.3.2 py36_0 attrs 19.3.0 py_0 babel 2.7.0 py_0 backcall 0.1.0 py36_0 blas 1.0 mkl bleach 3.1.0 py36_0 ca-certificates 2019.10.16 0 certifi 2019.9.11 py36_0 cffi 1.13.1 py36h7a1dbc1_0 chardet 3.0.4 py36_1003 cloudpickle 1.2.2 py_0 colorama 0.4.1 py36_0 cryptography 2.8 py36h7a1dbc1_0 cudatoolkit 9.0 1 cudnn 7.6.4 cuda9.0_0 decorator 4.4.1 py_0 defusedxml 0.6.0 py_0 django 2.2.7 pypi_0 pypi docutils 0.15.2 py36_0 entrypoints 0.3 py36_0 gast 0.3.2 py_0 grpcio 1.25.0 pypi_0 pypi h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 idna 2.8 pypi_0 pypi image 1.5.27 pypi_0 pypi imagesize 1.1.0 py36_0 importlib_metadata 0.23 py36_0 intel-openmp 2019.4 245 ipykernel 5.1.3 py36h39e3cac_0 ipython 7.9.0 py36h39e3cac_0 ipython_genutils 0.2.0 py36h3c5d0ee_0 isort 4.3.21 py36_0 jedi 0.15.1 py36_0 jinja2 2.10.3 py_0 jpeg 9b hb83a4c4_2 jsonschema 3.1.1 py36_0 jupyter_client 5.3.4 py36_0 jupyter_core 4.6.1 py36_0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-gpu 2.2.4 0 keras-preprocessing 1.1.0 py_1 keyring 18.0.0 py36_0 lazy-object-proxy 1.4.3 py36he774522_0 libpng 1.6.37 h2a8f88b_0 libprotobuf 3.9.2 h7bd577a_0 libsodium 1.0.16 h9d3ae62_0 markdown 3.1.1 py36_0 markupsafe 1.1.1 py36he774522_0 mccabe 0.6.1 py36_1 mistune 0.8.4 py36he774522_0 mkl 2019.4 245 mkl-service 2.3.0 py36hb782905_0 mkl_fft 1.0.15 py36h14836fe_0 mkl_random 1.1.0 py36h675688f_0 more-itertools 7.2.0 py36_0 nbconvert 5.6.1 py36_0 nbformat 4.4.0 py36h3a5bc1b_0 numpy 1.17.3 py36h4ceb530_0 numpy-base 1.17.3 py36hc3f5095_0 numpydoc 0.9.1 py_0 openssl 1.1.1d he774522_3 packaging 19.2 py_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py36_1 parso 0.5.1 py_0 pickleshare 0.7.5 py36_0 pillow 6.2.1 pypi_0 pypi pip 19.3.1 py36_0 prompt_toolkit 2.0.10 py_0 protobuf 3.10.0 pypi_0 pypi psutil 5.6.3 py36he774522_0 pycodestyle 2.5.0 py36_0 pycparser 2.19 py36_0 pyflakes 2.1.1 py36_0 pygments 2.4.2 py_0 pylint 2.4.3 py36_0 pyopenssl 19.0.0 py36_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 pyrsistent 0.15.4 py36he774522_0 pysocks 1.7.1 py36_0 python 3.6.9 h5500b2f_0 python-dateutil 2.8.1 py_0 pytz 2019.3 py_0 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.2 py36he774522_0 pyzmq 18.1.0 py36ha925a31_0 qt 5.9.7 vc14h73c81de_0 qtawesome 0.6.0 py_0 qtconsole 4.5.5 py_0 qtpy 1.9.0 py_0 requests 2.22.0 py36_0 rope 0.14.0 py_0 scipy 1.3.1 py36h29ff71c_0 setuptools 39.1.0 pypi_0 pypi sip 4.19.8 py36h6538335_0 six 1.13.0 pypi_0 pypi snowballstemmer 2.0.0 py_0 sphinx 2.2.1 py_0 sphinxcontrib-applehelp 1.0.1 py_0 sphinxcontrib-devhelp 1.0.1 py_0 sphinxcontrib-htmlhelp 1.0.2 py_0 sphinxcontrib-jsmath 1.0.1 py_0 sphinxcontrib-qthelp 1.0.2 py_0 sphinxcontrib-serializinghtml 1.1.3 py_0 spyder 3.3.6 py36_0 spyder-kernels 0.5.2 py36_0 sqlite 3.30.1 he774522_0 sqlparse 0.3.0 pypi_0 pypi tensorboard 1.10.0 py36he025d50_0 tensorflow 1.10.0 gpu_py36h3514669_0 tensorflow-base 1.10.0 gpu_py36h6e53903_0 tensorflow-gpu 1.10.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi testpath 0.4.2 py36_0 tornado 6.0.3 py36he774522_0 traitlets 4.3.3 py36_0 typed-ast 1.4.0 py36he774522_0 urllib3 
1.25.6 pypi_0 pypi vc 14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 wcwidth 0.1.7 py36h3d5aa90_0 webencodings 0.5.1 py36_1 werkzeug 0.16.0 py_0 wheel 0.33.6 py36_0 win_inet_pton 1.1.0 py36_0 wincertstore 0.2 py36h7fe50ca_0 wrapt 1.11.2 py36he774522_0 yaml 0.1.7 hc54c509_2 zeromq 4.3.1 h33f27b4_3 zipp 0.6.0 py_0 zlib 1.2.11 h62dcd97_3 ``` 2*RTX2080Ti driver 4.19.67
How to stream continuous output to the page in Python Tornado
When writing a service with Tornado, one request takes a long time and produces continuous output, but the write() calls inside get() only reach the page after get() has finished. How can the output produced during execution be printed to the page in real time?
```
class TestHandler(tornado.web.RequestHandler):
    def get(self):
        import time
        for i in range(10):
            self.write('this is ime test')
            time.sleep(1)
```
What I want is for the page to print once per second as well, but in fact the page shows all the strings in one go after 10 s. How do I solve this?
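A hedged sketch of the usual approach: turn the handler into a coroutine, call `self.flush()` after each chunk so Tornado pushes it to the socket, and replace the blocking `time.sleep` with `gen.sleep` so the IOLoop stays responsive. Whether the browser renders the chunks incrementally also depends on the client; `curl -N` shows the effect directly.

```python
import tornado.gen
import tornado.web


class TestHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        for i in range(10):
            self.write('this is time test %d\n' % i)
            yield self.flush()          # push the buffered chunk to the client now
            yield tornado.gen.sleep(1)  # non-blocking sleep; the IOLoop keeps running
```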
TensorFlow training reports InvalidArgumentError: Incompatible shapes: [15] vs. [15,6]; the label placeholder does not match the label data being fed. How do I fix this?
InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] 报错的详细信息如下所示: ``` INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] Caused by op 'input_producer/input_producer_EnqueueMany', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 320, in <module> 
batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) File "<ipython-input-19-6fa659dba762>", line 147, in batch_test tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization') File "<ipython-input-19-6fa659dba762>", line 84, in read_records filename_queue = tf.train.string_input_producer([filename]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 232, in string_input_producer cancel_op=cancel_op) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 164, in input_producer enq = q.enqueue_many([input_tensor]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 367, in enqueue_many self._queue_ref, vals, name=scope) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 1556, in _queue_enqueue_many_v2 name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() CancelledError (see above for traceback): Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1038 try: -> 1039 return fn(*args) 1040 except errors.OpError as e: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1020 feed_dict, fetch_list, target_list, -> 1021 status, run_metadata) 1022 H:\aa\Anaconda\anaconda\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback) 87 try: ---> 88 next(self.gen) 89 except StopIteration: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status() 465 compat.as_text(pywrap_tensorflow.TF_Message(status)), --> 466 pywrap_tensorflow.TF_GetCode(status)) 467 finally: InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] During handling of the above exception, another exception occurred: InvalidArgumentError Traceback (most recent call last) <ipython-input-19-6fa659dba762> in <module>() 318 range_num = 5 319 --> 320 batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) 321 <ipython-input-19-6fa659dba762> in batch_test(record_file, resize_height, resize_width, n_batch, train_op, loss, acc, range_num, val_batch) 187 images_x = np.reshape(images, (-1, 30000)) 188 labels_y = np.reshape(labels, (-1, 6)) --> 189 _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y}) # 50% 神经元在工作中 190 train_loss = train_loss + err 191 train_acc = train_acc + ac H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata) 776 try: 777 result = self._run(None, fetches, feed_dict, options_ptr, --> 778 run_metadata_ptr) 779 if run_metadata: 780 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 980 if final_fetches or final_targets: 981 results = self._do_run(handle, final_targets, final_fetches, --> 982 feed_dict_string, options, run_metadata) 983 else: 984 results = [] H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1030 if handle is None: 1031 return self._do_call(_run_fn, self._session, feed_dict, fetch_list, -> 1032 target_list, options, run_metadata) 1033 else: 1034 return self._do_call(_prun_fn, self._session, handle, feed_dict, H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1050 except KeyError: 1051 pass -> 1052 raise type(e)(node_def, op, message) 1053 1054 def _extend_graph(self): InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op 'Equal', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 311, in <module> correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.float32), y_) File 
"H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 672, in equal result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] ``` x,y- 占位符打印的信息如下: ``` x: Tensor("x-input:0", shape=(?, 100, 100, 3), dtype=float32) y_:Tensor("y_:0", shape=(?, 6), dtype=float32) ``` image 和 labels 的打印信息如下: ``` shape:(15, 100, 100, 3),tpye:float32,labels:[[ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 1.] [ 0. 0. 1. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 0. 0. 0. 0. 1. 0.]] ``` 整个运行的代码如下: ``` import tensorflow as tf import numpy as np import os import cv2 import matplotlib.pyplot as plt import random import time from PIL import Image os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' data_path = 'people_pictures_train/record/one_train_demo_people_train.tfrecords' # tfrecords 文件的地址 data_path_val = 'people_pictures_train/record/one_test_demo_people_val.tfrecords' # tfrecords 文件的地址 print("----------------------------") tf.reset_default_graph() def get_example_nums(tf_records_filenames): ''' 统计tf_records图像的个数(example)个数 :param tf_records_filenames: tf_records文件路径 :return: ''' nums= 0 for record in tf.python_io.tf_record_iterator(tf_records_filenames): nums += 1 return nums def show_image(title,image): ''' 显示图片 :param title: 图像标题 :param image: 图像的数据 :return: ''' # plt.figure("show_image") # print(image.dtype) plt.imshow(image) plt.axis('on') # 关掉坐标轴为 off plt.title(title) # 图像题目 plt.show() def get_batch_images(images,labels,batch_size,labels_nums,one_hot=False,shuffle=False,num_threads=1): ''' :param images:图像 :param labels:标签 :param batch_size: :param labels_nums:标签个数 :param one_hot:是否将labels转为one_hot的形式 :param shuffle:是否打乱顺序,一般train时shuffle=True,验证时shuffle=False :return:返回batch的images和labels ''' min_after_dequeue = 200 capacity = min_after_dequeue + 3 * batch_size # 保证capacity必须大于min_after_dequeue参数值 if shuffle: images_batch, labels_batch = tf.train.shuffle_batch([images,labels], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue, num_threads=num_threads) else: images_batch, labels_batch = tf.train.batch([images,labels], batch_size=batch_size, capacity=capacity, num_threads=num_threads) if one_hot: labels_batch = tf.one_hot(labels_batch, labels_nums, 1, 0) return images_batch,labels_batch def read_records(filename,resize_height, resize_width,type=None): ''' 
解析record文件:源文件的图像数据是RGB,uint8,[0,255],一般作为训练数据时,需要归一化到[0,1] :param filename: :param resize_height: :param resize_width: :param type:选择图像数据的返回类型 None:默认将uint8-[0,255]转为float32-[0,255] normalization:归一化float32-[0,1] standardization:归一化float32-[0,1],再减均值中心化 :return: ''' # 创建文件队列,不限读取的数量 filename_queue = tf.train.string_input_producer([filename]) # create a reader from file queue reader = tf.TFRecordReader() # reader从文件队列中读入一个序列化的样本 _, serialized_example = reader.read(filename_queue) # get feature from serialized example # 解析符号化的样本 features = tf.parse_single_example( serialized_example, features={ 'image_raw': tf.FixedLenFeature([], tf.string), 'height': tf.FixedLenFeature([], tf.int64), 'width': tf.FixedLenFeature([], tf.int64), 'depth': tf.FixedLenFeature([], tf.int64), 'labels': tf.FixedLenFeature([], tf.string) } ) tf_image = tf.decode_raw(features['image_raw'], tf.uint8)#获得图像原始的数据 tf_height = features['height'] tf_width = features['width'] tf_depth = features['depth'] # tf_label = tf.cast(features['labels'], tf.float32) tf_label = tf.decode_raw(features['labels'],tf.float32) # PS:恢复原始图像数据,reshape的大小必须与保存之前的图像shape一致,否则出错 # tf_image=tf.reshape(tf_image, [-1]) # 转换为行向量 tf_image=tf.reshape(tf_image, [resize_height, resize_width, 3]) # 设置图像的维度 tf_label=tf.reshape(tf_label, [6]) # 设置图像的维度 # 恢复数据后,才可以对图像进行resize_images:输入uint->输出float32 # tf_image=tf.image.resize_images(tf_image,[224, 224]) # [3]数据类型处理 # 存储的图像类型为uint8,tensorflow训练时数据必须是tf.float32 if type is None: tf_image = tf.cast(tf_image, tf.float32) elif type == 'normalization': # [1]若需要归一化请使用: # 仅当输入数据是uint8,才会归一化[0,255] # tf_image = tf.cast(tf_image, dtype=tf.uint8) # tf_image = tf.image.convert_image_dtype(tf_image, tf.float32) tf_image = tf.cast(tf_image, tf.float32) * (1. / 255.0) # 归一化 elif type == 'standardization': # 标准化 # tf_image = tf.cast(tf_image, dtype=tf.uint8) # tf_image = tf.image.per_image_standardization(tf_image) # 标准化(减均值除方差) # 若需要归一化,且中心化,假设均值为0.5,请使用: tf_image = tf.cast(tf_image, tf.float32) * (1. 
/ 255) - 0.5 # 中心化 # 这里仅仅返回图像和标签 # return tf_image, tf_height,tf_width,tf_depth,tf_label return tf_image,tf_label def batch_test(record_file,resize_height,resize_width,n_batch,train_op,loss,acc,range_num,val_batch): ''' :param record_file: record文件路径 :param resize_height: :param resize_width: :return: :PS:image_batch, label_batch一般作为网络的输入 ''' # 读取record函数 tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization') image_batch, label_batch= get_batch_images(tf_image,tf_label,batch_size=15,labels_nums=6,one_hot=False,shuffle=True) a = image_batch.get_shape() a2 = a.as_list() b = label_batch.get_shape() b2 = b.as_list() print('image_batch: '+ str(image_batch) + ' label_batch: ' + str(label_batch)) print('image_batch-len:' + str(len(a2)) + ' label_batch-len: ' + str(len(b2))) # 测试的数据 images_val,labels_val = read_records(data_path_val,100,100,type='normalization') image_batch_val, label_batch_val = get_batch_images(images_val,labels_val,batch_size=15,labels_nums=6,one_hot=False,shuffle=True) # print('image_batch_val: '+ str(image_batch_val) + ' label_batch_val: ' + str(label_batch_val)) init = tf.global_variables_initializer() with tf.Session() as sess: # 开始一个会话 sess.run(init) # train_writer = tf.summary.FileWriter('logs/train',sess.graph) # 当前目录下的 logs 文件夹,如果没有这个文件夹,会自己键, 写入graph 的图 # test_writer = tf.summary.FileWriter('logs/test',sess.graph) # 当前目录下的 logs 文件夹,如果没有这个文件夹,会自己键, 写入graph 的图 coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) for epoch in range(range_num) : start_time = time.time() train_loss, train_acc = 0,0 for i in range(n_batch): images, labels = sess.run([image_batch, label_batch]) print('shape:{},tpye:{},labels:{}'.format(images.shape,images.dtype,labels)) print('images-len:' + str(len(images)) + ' labels-len: ' + str(len(labels))) for i in range(len(images)): show_image("image0", images[i, :, :, :]) a = np.zeros( (len(labels)) ) print(' a: ' +str(a)) for i in range(len(labels)): for j in range(len(labels[i])): if labels[i][j] > 0: a[i] = j print(' a: ' +str(a)) print('x: ' + str(x) + ' y_:' + str(y_)) images_x = np.reshape(images, (-1, 30000)) labels_y = np.reshape(labels, (-1, 6)) _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y}) # 50% 神经元在工作中 train_loss = train_loss + err train_acc = train_acc + ac print(" train loss: %f" % (np.sum(train_err)/n_batch)) print(" train acc: %f" % (np.sum(train_acc)/n_batch)) val_loss, val_acc = 0, 0 for i in range(val_batch): # test 在会话中取出images和labels测试数据, images_val2 主要是为了与 images_val 进行区分 images_val2, labels_val2 = sess.run([image_batch_val, label_batch_val]) val_loss, val_acc = sess.run([loss,acc], feed_dict={x:images_val_x, y_:labels_val2}) # 测试一下准确率,喂的数据是,图片和图片的标签 val_loss = val_loss + err val_acc = val_acc + ac print(" validation loss: %f" % (np.sum(val_loss)/val_batch)) print(" validation acc: %f" % (np.sum(val_acc)/val_batch)) # 停止所有线程 coord.request_stop() coord.join(threads) # 每个批次的大小 batch_size = 15 #每个批次 10,一次性放入100张图,放到神经网络中进行训练,以矩阵的形式放入 # 计算一共有多少个批次 # n_batch = mnist.train.num_examples // batch_size #整除 n_batch = get_example_nums(data_path) // batch_size val_batch = get_example_nums(data_path_val) // batch_size # 测试图片的数量 转换格式时以一个batch 放所有的图片 # val_num = get_example_nums(data_path_val) # 测试图片的数量 转换格式时以一个batch 放所有的图片 # train_num = get_example_nums(data_path) # 测试图片的数量 转换格式时以一个batch 放所有的图片 print ("-----------------" + str(n_batch) + " batch------------") #将所有的图片resize成100*100 w=100 h=100 c=3 #-----------------构建网络---------------------- #占位符 
#-----------------构建网络---------------------- #占位符 x = tf.placeholder(tf.float32,[None,100,100,3],name='x-input') #图片像素 转换 一维向量,行与批次有关,none 代表行,列是784 y_=tf.placeholder(tf.float32,shape=[None,6],name='y_') def inference(input_tensor, train, regularizer): with tf.variable_scope('layer1-conv1'): conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0)) conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME') relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases)) with tf.name_scope("layer2-pool1"): pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID") with tf.variable_scope("layer3-conv2"): conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0)) conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME') relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases)) with tf.name_scope("layer4-pool2"): pool2 = tf.nn.max_pool(relu2, ksize=[1, 2 , 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer5-conv3"): conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME') relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases)) with tf.name_scope("layer6-pool3"): pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') with tf.variable_scope("layer7-conv4"): conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1)) conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0)) conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME') relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases)) with tf.name_scope("layer8-pool4"): pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') nodes = 6*6*128 reshaped = tf.reshape(pool4,[-1,nodes]) with tf.variable_scope('layer9-fc1'): fc1_weights = tf.get_variable("weight", [nodes, 1024], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc1_weights)) fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1)) fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases) if train: fc1 = tf.nn.dropout(fc1, 0.5) with tf.variable_scope('layer10-fc2'): fc2_weights = tf.get_variable("weight", [1024, 512], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc2_weights)) fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1)) fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases) if train: fc2 = tf.nn.dropout(fc2, 0.5) with tf.variable_scope('layer11-fc3'): fc3_weights = tf.get_variable("weight", [512, 6], initializer=tf.truncated_normal_initializer(stddev=0.1)) if regularizer != None: tf.add_to_collection('losses', regularizer(fc3_weights)) fc3_biases = tf.get_variable("bias", [6], initializer=tf.constant_initializer(0.1)) logit = tf.matmul(fc2, fc3_weights) + fc3_biases return logit 
#--------------------------- end of network ---------------------------
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits = inference(x, False, regularizer)

# (Small trick) multiply logits by 1 and give the result a name, so the output
# tensor can be fetched by name when the model is loaded later.
b = tf.constant(value=1, dtype=tf.float32)
logits_eval = tf.multiply(logits, b, name='logits_eval')

# loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_)
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.float32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("----------------------------")

if __name__ == '__main__':
    range_num = 5
    batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch)
```
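Judging from the traceback and the shapes printed above, the `[15]` vs `[15,6]` mismatch comes from the `correct_prediction` line near the end: `tf.argmax(logits, 1)` yields a shape-`[15]` vector of class indices, while `y_` holds shape-`[15,6]` one-hot rows. A hedged sketch of the usual fix is to compare argmax against argmax:

```python
# Compare predicted class indices with the one-hot labels' class indices;
# both sides are then shape [batch], so tf.equal sees matching shapes.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y_, 1))
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```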
TypeError: 'unicode' object is not callable in Python 2.7
Just learning Python. The threaded functions I wrote with def would not work, so I imitated someone else's single-threaded code and bolted multithreading straight onto it, and then got the error below. It's a crawler; the threads do start, but then unicode cannot be called. Please help.
```
# -*- coding: utf-8 -*
import sys
reload(sys)
sys.setdefaultencoding('utf8')
import requests
import re
import time
import threading
import sys
import Queue as queue
import sys
import datetime

live = open('未爬.txt', 'w')
die = open('已爬.txt', 'w')
input_queue = queue.Queue()
list = raw_input("--> Enter Lists : ")
thread = input(" -> Thread : ")
link = “************”
head = {'User-agent': 'Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30'}
s = requests.session()
g = s.get(link, headers=head)
list = open(list, 'r')
print('')
print("-"*50)
print("-"*50)
while True:
    网页导入 = list.readline().replace('\n', '')
    if not www:
        continue
    bacot = email.strip().split(':')
    xxx = {''************''}
    cek = s.post(link, headers=head, data=xxx).text
    if "************" in cek:
        print("|未爬|----->" + 网页 + "")
        live.write(网页 + "\n")
    else:
        print("|已爬 | -----> " + 网页 + " ")
        die.write(网页 + "\n")
for x in range(int(thread)):
    t = threading.Thread(target=cek)
    t.setDaemon(True)
    t.start()
print('')
print('-------------------------------------------------')
print('')
```
Python 3.8 with PyInstaller 3.5 cannot export an exe: TypeError: an integer is required (got type bytes)
At first I thought my own code was the problem, but later I found that even if my code is just print("Hello World!"), the same thing happens.
```
import requests

url = "https://ar.**d.cn/users/20000"
headers = {"Authorization": "Basic NDM1NzY1OTU6ckgyNzdadXpaZi96VUh4b1g3bVZNQkpNMmtUZm5XUjF2ZEdwNWhlVDlDRVlzMjMvV2VBeUJaQWtyR0h2NHcvb1FTTFJjeWNxc1h******UlsS0ZpVEh5TVM5WW96cjR1SURoNkhqSFhLRkNvUWMyZ0kyNUNZTzRXYnM5aUFKRklEMjJXM3lGOE5MeklTYnF0b2g2SXB5QWo0b2FvOUR6KzRHYTlwRGNjamw2S2k3Umw2SUdKZi9Od2ZXSkFsRmJOUnliRzh3T0tZNFEySGpkbHFTSnIxc0pZa0h3TEY4enE4OEt1U3V4TzJzU29j"}
r = requests.get(url, headers=headers)
print(r.json())
```
The code above was tested and runs fine, but packaging it into an exe fails: ![screenshot](https://img-ask.csdn.net/upload/201911/07/1573062806_777820.jpg)
Why does catalyst keep raising ImportError: cannot import name 'run_algorithm'?
As the title says. My environment (py 3.6):
```
aiodns==1.1.1 aiohttp==3.5.4 alabaster==0.7.12 alembic==0.9.7 appnope==0.1.0 asn1crypto==0.24.0 astroid==2.2.5 async-timeout==3.0.1 attrdict==2.0.1 attrs==19.1.0 Babel==2.6.0 backcall==0.1.0 bcolz==1.2.1 bleach==3.1.0 boto3==1.5.27 botocore==1.8.50 Bottleneck==1.2.1 cchardet==2.1.1 ccxt==1.17.94 certifi==2019.3.9 cffi==1.12.3 chardet==3.0.4 click==6.7 cloudpickle==1.0.0 contextlib2==0.5.5 cryptography==2.6.1 cycler==0.10.0 cyordereddict==1.0.0 Cython==0.27.3 cytoolz==0.9.0.1 decorator==4.4.0 defusedxml==0.6.0 docutils==0.14 empyrical==0.2.2 enigma-catalyst==0.5.21 entrypoints==0.3 eth-abi==1.3.0 eth-account==0.2.3 eth-hash==0.2.0 eth-keyfile==0.5.1 eth-keys==0.2.2 eth-rlp==0.1.2 eth-typing==2.1.0 eth-utils==1.6.0 hexbytes==0.1.0 idna==2.8 idna-ssl==1.1.0 imagesize==1.1.0 inflection==0.3.1 intervaltree==2.1.0 ipykernel==5.1.0 ipython==7.5.0 ipython-genutils==0.2.0 isort==4.3.19 jedi==0.13.3 Jinja2==2.10.1 jmespath==0.9.4 jsonschema==3.0.1 jupyter-client==5.2.4 jupyter-core==4.4.0 keyring==18.0.0 kiwisolver==1.1.0 lazy-object-proxy==1.4.1 Logbook==0.12.5 lru-dict==1.1.6 lxml==4.3.3 Mako==1.0.7 MarkupSafe==1.1.1 matplotlib==3.1.0 mccabe==0.6.1 mistune==0.8.4 mkl-fft==1.0.12 mkl-random==1.0.2 more-itertools==7.0.0 multidict==4.5.2 multipledispatch==0.4.9 nbconvert==5.5.0 nbformat==4.4.0 networkx==2.1 numexpr==2.6.4 numpy==1.16.0 numpydoc==0.9.1 packaging==19.0 pandas==0.24.2 pandas-datareader==0.6.0 pandocfilters==1.4.2 parsimonious==0.8.1 parso==0.4.0 patsy==0.5.1 pexpect==4.7.0 pickleshare==0.7.5 prompt-toolkit==2.0.9 psutil==5.6.2 ptyprocess==0.6.0 pycares==3.0.0 pycodestyle==2.5.0 pycparser==2.19 pycryptodome==3.8.2 pyflakes==2.1.1 Pygments==2.4.0 pylint==2.3.1 pyOpenSSL==19.0.0 pyparsing==2.4.0 pyrsistent==0.14.11 PySocks==1.7.0 python-dateutil==2.8.0 python-editor==1.0.4 pytz==2019.1 pyzmq==18.0.0 QtAwesome==0.5.7 qtconsole==4.5.1 QtPy==1.7.1 Quandl==3.4.5 redo==2.0.1 requests==2.21.0 requests-file==1.4.3 requests-ftp==0.3.1 requests-toolbelt==0.8.0 rlp==1.1.0 rope==0.14.0 s3transfer==0.1.13 scipy==1.2.1 six==1.12.0 snowballstemmer==1.2.1 sortedcontainers==1.5.9 Sphinx==2.0.1 sphinxcontrib-applehelp==1.0.1 sphinxcontrib-devhelp==1.0.1 sphinxcontrib-htmlhelp==1.0.2 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.2 sphinxcontrib-serializinghtml==1.1.3 spyder==3.3.4 spyder-kernels==0.4.4 SQLAlchemy==1.2.2 statsmodels==0.9.0 tables==3.4.2 testpath==0.4.2 toolz==0.9.0 tornado==6.0.2 traitlets==4.3.2 typed-ast==1.3.4 typing-extensions==3.7.2 urllib3==1.24.3 wcwidth==0.1.7 web3==4.4.1 webencodings==0.5.1 websockets==5.0.1 wrapt==1.11.1 wurlitzer==1.0.2 yarl==1.1.0
```
When running catalyst, it reports:
```
runfile('/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py', wdir='/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency')
Traceback (most recent call last):
  File "<ipython-input-10-5dde7acc5e52>", line 1, in <module>
    runfile('/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py', wdir='/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency')
  File "/Users/mac/miniconda3/envs/catalyst/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)
  File "/Users/mac/miniconda3/envs/catalyst/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/trading.py", line 6, in <module>
    from catalyst import run_algorithm
  File "/Users/mac/Desktop/UPF/Master Thesis/py/crypocurrency/catalyst.py", line 1, in <module>
    from catalyst import run_algorithm
ImportError: cannot import name 'run_algorithm'
```
I searched online for a long time but none of the solutions I found worked. Could it be caused by something that already went wrong when installing catalyst? Below is the error that occurred during installation. Please help!
```
ERROR: Cannot uninstall 'certifi'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Note: you may need to restart the kernel to use updated packages.
```
tornado queue: get() is not triggered after put()
New data was put() into a tornado.queues.Queue, but the consumer never fires. Does anyone know why?
```
@gen.coroutine
def sendMsg(self):
    while 1:
        logger.info('sendMsg moniter start!')
        data = yield self.msg_que.get()
        logger.info('sendMsg moniter rev:%s' % data)
        try:
            self.handle_msg(data)
        except Exception as e:
            logger.info('sendMsg error:%s' % e)
        finally:
            self.msg_que.task_done()
```
I set a breakpoint at the rev line and found that self.msg_que does contain data.
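For comparison, a self-contained producer/consumer that does work (a sketch following the pattern in the tornado.queues documentation): the consumer coroutine must actually be scheduled on the same IOLoop the producer runs on, e.g. via spawn_callback. If sendMsg was never scheduled as a coroutine, or put() happens on a different loop or thread, get() will never resume.

```python
from tornado import gen, ioloop, queues

q = queues.Queue()

@gen.coroutine
def consumer():
    while True:
        item = yield q.get()        # suspends until put() wakes it on this IOLoop
        print('consumed %r' % item)
        q.task_done()

@gen.coroutine
def producer():
    for i in range(3):
        yield q.put(i)
    yield q.join()                  # wait until every item has been task_done()

loop = ioloop.IOLoop.current()
loop.spawn_callback(consumer)       # schedule the consumer on the running loop
loop.run_sync(producer)
```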
tornado motor: calling another coroutine that queries MongoDB returns a Future object
I use motor under the Tornado framework to work with MongoDB; following the [official tutorial](http://motor.readthedocs.org/en/stable/tutorial.html) gives normal results.
```python
>>> @gen.coroutine
... def do_find_one():
...     document = yield db.test_collection.find_one({'i': {'$lt': 2}})
...     print document
...
>>> IOLoop.current().run_sync(do_find_one)
{u'i': 0, u'_id': ObjectId('...')}
```
Now I want a.py to call b.py, where b.py uses motor to operate on MongoDB, for example inserting data, and then returns the _id to a.py. My code looks like this:
a.py
```python
from b import testb
from tornado import ioloop
from tornado import gen

class testa(Object):
    @gen.coroutine
    def printa(self):
        tmp = testb()
        id = tmp.do_insert()
        print id

a = testa()
ioloop.IOLoop.current().run_sync(a.printa)
```
b.py
```python
from tornado import gen
import motor

client = motor.MotorClient('localhost', 27017)
db = client.testdb

class testb(object):
    @gen.coroutine
    def do_insert(self):
        coll = db.testcoll
        yield coll.find_one({'bookname': 'huihuang'})
```
Because testb has a yield, the generator cannot use return. With this code, what a.py prints is a `<tornado.concurrent.Future object at 0x7fa83c900e10>`, and I don't know how to get the data out of the Future. Could someone take a look, or tell me where my understanding went wrong?
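A hedged sketch of the two changes that usually resolve this: a coroutine must be waited on with yield (merely calling it returns a Future), and under Python 2 a @gen.coroutine function hands a value back with raise gen.Return(...) instead of return:

```python
# b.py (sketch)
from tornado import gen
import motor

client = motor.MotorClient('localhost', 27017)
db = client.testdb

class testb(object):
    @gen.coroutine
    def do_insert(self):
        # keep the result of the motor call and hand it back to the caller
        doc = yield db.testcoll.find_one({'bookname': 'huihuang'})
        raise gen.Return(doc)   # Python 2 coroutines "return" via gen.Return

# a.py (sketch)
@gen.coroutine
def printa():
    tmp = testb()
    result = yield tmp.do_insert()  # yield unwraps the Future into its value
    print result
```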
Problems using Tornado in Python
I used to work in Java with Tomcat, and recently I have been learning Python. I ran into a problem when using Tornado that I cannot figure out. When I start Tomcat on a Linux server, I can access it from outside via ip + port, and Tomcat keeps running after I close the Xshell session. I wrote a hello-world demo with Tornado; after starting it, it is also reachable from outside, but as soon as I close Xshell, Tornado shuts down. Is there something that needs to be configured? The code is as follows: ![screenshot](https://img-ask.csdn.net/upload/201805/07/1525700422_216465.png)
How to push messages to the page in real time with Python + Tornado long connections
I have read the Tornado tutorials. They update data on the page in real time over a long connection, but always by sending a POST on a button click and iterating over the callbacks inside the POST handler to push updates. In my case, the data arrives through a Python callback function I handed to C code: the C side receives data and invokes my callback to parse it, and after parsing I need to push the data to the page. How should I do that? Right now, iterating over the page-display callbacks inside my Python callback raises the following error: ![screenshot](https://img-ask.csdn.net/upload/201908/09/1565335531_550708.png)
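A hedged sketch of the usual pattern when data originates on a foreign (here C-driven) thread: IOLoop.add_callback is the one documented thread-safe entry point into Tornado, so the C callback should only hand the data over to the IOLoop, which then notifies the waiting handlers. The names push_to_clients and waiters below are hypothetical placeholders:

```python
from tornado.ioloop import IOLoop

main_loop = IOLoop.current()   # capture the loop on the main thread at startup

waiters = []                   # hypothetical list of waiting long-poll callbacks

def push_to_clients(data):
    # runs on the IOLoop thread: safe to finish long-polling handlers here
    for callback in waiters:
        callback(data)

def c_callback(data):
    # invoked from the C thread: do NOT touch handlers here; just hand off
    main_loop.add_callback(push_to_clients, data)
```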
pip problem with Python on Linux
Today I installed Python 3 and then pip-installed tornado, but the import failed. Using the find command I discovered pip had installed tornado into Python 2.7: ![screenshot](https://img-ask.csdn.net/upload/201804/29/1525006422_521904.png) A search showed there is a pip inside python3.6, so I went to /usr/bin, deleted the pip symlink, and created a symlink to python3.6's pip, but then pip stopped working: ![screenshot](https://img-ask.csdn.net/upload/201804/29/1525006713_105393.png)
pyspider hangs at "result_worker starting..." on launch. How do I fix this?
**Starting pyspider, it hangs at result_worker starting and goes no further.**
```
Microsoft Windows [Version 10.0.17763.678]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Users\zhihe>pyspider all
c:\users\zhihe\appdata\local\programs\python\python37\lib\site-packages\pyspider\libs\utils.py:196: FutureWarning: timeout is not supported on your platform.
  warnings.warn("timeout is not supported on your platform.", FutureWarning)
phantomjs fetcher running on port 25555
[I 190821 00:46:03 result_worker:49] result_worker starting...
```
Every solution I can find online says to turn off the firewall, but turning off the firewall had no effect.
**My Python version:**
```
Python 3.7.4 (tags/v3.7.4:e09359112e, Jul 8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)] on win32
```
**pyspider version:**
```
C:\Users\zhihe>pip3 show pyspider
Name: pyspider
Version: 0.3.10
Summary: A Powerful Spider System in Python
Home-page: https://github.com/binux/pyspider
Author: Roy Binux
Author-email: roy@binux.me
License: Apache License, Version 2.0
Location: c:\users\zhihe\appdata\local\programs\python\python37\lib\site-packages
Requires: chardet, Jinja2, tblib, u-msgpack-python, six, click, tornado, lxml, pycurl, requests, wsgidav, Flask, cssselect, pyquery, Flask-Login
```
**pycurl version:**
```
C:\Users\zhihe>pip3 show pycurl
Name: pycurl
Version: 7.43.0.3
Summary: PycURL -- A Python Interface To The cURL library
Home-page: http://pycurl.io/
Author: Kjetil Jacobsen, Markus F.X.J. Oberhumer, Oleg Pudeyev
Author-email: kjetilja at gmail.com, markus at oberhumer.com, oleg at bsdpower.com
License: LGPL/MIT
Location: c:\users\zhihe\appdata\local\programs\python\python37\lib\site-packages
Requires:
Required-by: pyspider
```
I have already made the replacements in all three files that needed the keyword replaced. Hoping an expert can clear this up.
Error running Tornado with Python
I installed tornado today and wrote a test demo; the code is as follows: ![screenshot](https://img-ask.csdn.net/upload/201804/29/1525007168_382510.png) It runs without problems on Windows, but on Linux it complains that the zlib module is missing, so I downloaded a zlib package and then got the error below: ![screenshot](https://img-ask.csdn.net/upload/201804/29/1525007401_668393.png) I have downloaded several zlib packages and they all fail with this error.