ValueError: No data files found in satellite/data\satellite_train_*.tfrecord

While following the book's "Build Your Own Image Recognition Model" project, I hit this error. It says no data files can be found, but both the file path and the data look fine.

D:\Anaconda\anaconda\envs\tensorflow\python.exe D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py --train_dir=satellite/train_dir --dataset_name=satellite --dataset_split_name=train --dataset_dir=satellite/data --model_name=inception_v3 --checkpoint_path=satellite/pretrained/inception_v3.ckpt --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --max_number_of_steps=100000 --batch_size=32 --learning_rate=0.001 --learning_rate_decay_type=fixed --save_interval_secs=300 --save_summaries_secs=2 --log_every_n_steps=10 --optimizer=rmsprop --weight_decay=0.00004
WARNING:tensorflow:From D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py:397: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 572, in
tf.app.run()
File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 430, in main
common_queue_min=10 * FLAGS.batch_size)
File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\dataset_data_provider.py", line 94, in __init
_
scope=scope)
File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 238, in parallel_read
data_files = get_data_files(data_sources)
File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 311, in get_data_files
raise ValueError('No data files found in %s' % (data_sources,))
ValueError: No data files found in satellite/data\satellite_train_*.tfrecord

weixin_43106248
weixin_43106248 replying to Amanda-hb: Check whether _FILE_PATTERN and the related settings in your satellite.py are written correctly: _FILE_PATTERN = 'satellite_%s_*.tfrecord' SPLITS_TO_SIZES = {'train': 4800, 'validation': 1200} _NUM_CLASSES = 6
Replied 4 months ago
qq_39060327
Amanda-hb: How did you finally solve this problem? I ran into the same issue and haven't been able to resolve it.
Replied 4 months ago
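For reference, the satellite.py the first comment points at is the dataset definition under slim/datasets/, usually copied from slim's flowers.py. A minimal sketch, assuming the split sizes and class count quoted in that comment (verify them against what your conversion script actually wrote):

```python
# slim/datasets/satellite.py (sketch modeled on slim's flowers.py;
# the numbers below come from the comment above and are assumptions)
_FILE_PATTERN = 'satellite_%s_*.tfrecord'

SPLITS_TO_SIZES = {'train': 4800, 'validation': 1200}

_NUM_CLASSES = 6

# get_split() builds the file pattern as
#   os.path.join(dataset_dir, _FILE_PATTERN % split_name)
# so the .tfrecord files must sit directly inside --dataset_dir and be named
# satellite_train_*.tfrecord / satellite_validation_*.tfrecord exactly.
```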

2 answers

Why does your path mix forward slashes and backslashes? Check that. Also, are your files really named like satellite_train_xxx.tfrecord? Try an absolute path and see.

WellTung_666
WellTung_666: It probably isn't a path problem; a wrong path would report that the file doesn't exist. It says no data files were found, yet my file names and path are both correct.
Replied 10 months ago
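Whichever it is, a quick check settles whether the pattern actually matches anything on disk. A minimal sketch, assuming TensorFlow 1.x as in the logs above; the directory below is a placeholder, substitute your own absolute --dataset_dir:

```python
import glob
import os

import tensorflow as tf  # TF 1.x, matching the logs in this thread

# Placeholder path -- point this at your own absolute dataset directory.
dataset_dir = r'D:\PyCharm\PycharmProjects\chapter_3\slim\satellite\data'
pattern = os.path.join(dataset_dir, 'satellite_train_*.tfrecord')

print(pattern)
print(glob.glob(pattern))      # what plain Python matches
print(tf.gfile.Glob(pattern))  # roughly what slim's parallel_reader does

# Both lists empty -> the files are missing, misnamed (.tfrecords vs
# .tfrecord), or were written to a different directory than --dataset_dir.
```

If the glob does match but the training script still fails, relative paths are the usual suspect: run the script from the slim directory, or pass the absolute path to --dataset_dir as the answer above suggests.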

Why does running it report the following error?

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py:242: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/input.py:278: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...).
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/input.py:190: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs).
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/input.py:199: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/input.py:199: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the tf.data module.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/input.py:202: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py:94: TFRecordReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.TFRecordDataset.
WARNING:tensorflow:From /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/preprocessing/inception_preprocessing.py:148: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From train_image_classifier.py:458: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version.
Instructions for updating:
Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size) (or padded_batch(...) if dynamic_pad=True).
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
WARNING:tensorflow:From train_image_classifier.py:479: softmax_cross_entropy (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.softmax_cross_entropy instead. Note that the order of the logits and labels arguments has been changed.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:370: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:

Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.

See tf.nn.softmax_cross_entropy_with_logits_v2.

WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:371: compute_weighted_loss (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.compute_weighted_loss instead.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:151: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/losses/python/losses/loss_ops.py:120: add_arg_scope..func_with_args (from tensorflow.contrib.losses.python.losses.loss_ops) is deprecated and will be removed after 2016-12-30.
Instructions for updating:
Use tf.losses.add_loss instead.
INFO:tensorflow:Fine-tuning from satellite/pretrained/inception_v3.ckpt
WARNING:tensorflow:From /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/slim/python/slim/learning.py:737: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.MonitoredTrainingSession
2019-06-06 14:05:13.877400: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Error reported to Coordinator: , Cannot assign a device for operation InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss: node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) ]]

Caused by op 'InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss', defined at:
File "train_image_classifier.py", line 590, in
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
sys.exit(main(argv))
File "train_image_classifier.py", line 487, in main
clones = model_deploy.create_clones(deploy_config, clone_fn, [batch_queue])
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "train_image_classifier.py", line 470, in clone_fn
logits, end_points = network_fn(images)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/nets_factory.py", line 155, in network_fn
**kwargs)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py", line 490, in inception_v3
depth_multiplier=depth_multiplier)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py", line 104, in inception_v3_base
net = slim.conv2d(inputs, depth(32), [3, 3], stride=2, scope=end_point)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1155, in convolution2d
conv_dims=2)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1058, in convolution
outputs = layer.apply(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1227, in apply
return self.__call__(inputs, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 530, in call
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 538, in call
self._maybe_build(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1603, in maybe_build
self.build(input_shapes)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 165, in build
dtype=self.dtype)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 440, in add_weight
self._handle_weight_regularization(name, variable, regularizer)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1333, in _handle_weight_regularization
self.add_loss(functools.partial(_loss_for_variable, variable))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 273, in add_loss
loss_tensor = regularizer()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 710, in _tag_unconditional
loss = loss()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1326, in _loss_for_variable
regularization = regularizer(v)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/regularizers.py", line 107, in l2
return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 4615, in l2_loss
"L2Loss", t=t, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init
_
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Cannot assign a device for operation InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss: node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) ]]

Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1317, in _run_fn
self._extend_graph()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss: {{node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss}}was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[{{node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train_image_classifier.py", line 590, in
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
sys.exit(main(argv))
File "train_image_classifier.py", line 586, in main
sync_optimizer=optimizer if FLAGS.sync_replicas else None)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 748, in train
master, start_standard_services=False, config=session_config) as sess:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/contextlib.py", line 112, in __enter
_
return next(self.gen)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/supervisor.py", line 1004, in managed_session
self.stop(close_summary_writer=close_summary_writer)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/supervisor.py", line 832, in stop
ignore_live_threads=ignore_live_threads)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/supervisor.py", line 993, in managed_session
start_standard_services=start_standard_services)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/supervisor.py", line 730, in prepare_or_wait_for_session
init_feed_dict=self._init_feed_dict, init_fn=self._init_fn)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/training/session_manager.py", line 287, in prepare_session
sess.run(init_op, feed_dict=init_feed_dict)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss: node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) ]]

Caused by op 'InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss', defined at:
File "train_image_classifier.py", line 590, in
tf.app.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
sys.exit(main(argv))
File "train_image_classifier.py", line 487, in main
clones = model_deploy.create_clones(deploy_config, clone_fn, [batch_queue])
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/deployment/model_deploy.py", line 193, in create_clones
outputs = model_fn(*args, **kwargs)
File "train_image_classifier.py", line 470, in clone_fn
logits, end_points = network_fn(images)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/nets_factory.py", line 155, in network_fn
**kwargs)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py", line 490, in inception_v3
depth_multiplier=depth_multiplier)
File "/Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py", line 104, in inception_v3_base
net = slim.conv2d(inputs, depth(32), [3, 3], stride=2, scope=end_point)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1155, in convolution2d
conv_dims=2)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1058, in convolution
outputs = layer.apply(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1227, in apply
return self.__call__(inputs, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 530, in call
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 538, in call
self._maybe_build(inputs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1603, in maybe_build
self.build(input_shapes)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 165, in build
dtype=self.dtype)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 440, in add_weight
self._handle_weight_regularization(name, variable, regularizer)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1333, in _handle_weight_regularization
self.add_loss(functools.partial(_loss_for_variable, variable))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 273, in add_loss
loss_tensor = regularizer()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 710, in _tag_unconditional
loss = loss()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1326, in _loss_for_variable
regularization = regularizer(v)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/layers/python/layers/regularizers.py", line 107, in l2
return standard_ops.multiply(my_scale, nn.l2_loss(weights), name=name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 4615, in l2_loss
"L2Loss", t=t, name=name)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
op_def=op_def)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init
_
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Cannot assign a device for operation InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss: node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) was explicitly assigned to /device:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device. The requested device appears to be a GPU, but CUDA is not enabled.
[[node InceptionV3/InceptionV3/Conv2d_1a_3x3/kernel/Regularizer/l2_regularizer/L2Loss (defined at /Users/jiashihui/Documents/python_namespace/project_21/ImageRecognize/slim/nets/inception_v3.py:104) ]]

ERROR:tensorflow:==================================
Object was never used (type ):

If you want to mark it as used call its "mark_used()" method.
It was originally created here:
File "train_image_classifier.py", line 590, in
tf.app.run() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv)) File "train_image_classifier.py", line 586, in main
sync_optimizer=optimizer if FLAGS.sync_replicas else None) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 791, in train
should_retry = True File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/util/tf_should_use.py", line 193, in wrapped
return _add_should_use_warning(fn(*args, **kwargs))
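The failure above is a device-placement problem, not a data problem: the ops were pinned to /device:GPU:0, but this TensorFlow build only sees the CPU ("CUDA is not enabled"). A minimal check, again assuming TF 1.x; if only a CPU shows up, either run train_image_classifier.py with its --clone_on_cpu=True flag or install tensorflow-gpu with a working CUDA setup:

```python
# List the devices this TensorFlow build can actually place ops on.
from tensorflow.python.client import device_lib

print([d.name for d in device_lib.list_local_devices()])
# ['/device:CPU:0'] only -> anything assigned to /device:GPU:0 fails with
# exactly the InvalidArgumentError shown above.
```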

Adrenline
Adrenline: Judging from the last few lines, our errors look about the same! Did you manage to solve it?
Replied 25 days ago
其他相关推荐
RK3288 make otapackage 报错ValueError: need more than 1 value to unpack
mkbootimg_args = (str) multistage_support = (str) 1 recovery_api_version = (int) 2 selinux_fc = (str) /tmp/targetfiles-WQjmn2/BOOT/RAMDISK/file_contexts system_size = (int) 1610612736 tool_extensions = (str) device/rockchip/rksdk update_rename_support = (str) 1 use_set_metadata = (str) 1 using device-specific extensions in device/rockchip/rksdk building image from target_files RECOVERY... running: mkbootfs -f /tmp/targetfiles-WQjmn2/META/recovery_filesystem_config.txt /tmp/targetfiles-WQjmn2/RECOVERY/RAMDISK running: minigzip running: mkbootimg --kernel /tmp/targetfiles-WQjmn2/RECOVERY/kernel --second /tmp/targetfiles-WQjmn2/RECOVERY/resource.img --ramdisk /tmp/tmpBdTCrB --output /tmp/tmpNnUZoC running: drmsigntool /tmp/tmpNnUZoC build/target/product/security/privateKey.bin src_path: /tmp/tmpNnUZoC, private_key_path: build/target/product/security/privateKey.bin can't open file build/target/product/security/privateKey.bin! no find private key, so not sign boot.img! building image from target_files BOOT... running: mkbootfs -f /tmp/targetfiles-WQjmn2/META/boot_filesystem_config.txt /tmp/targetfiles-WQjmn2/BOOT/RAMDISK running: minigzip running: mkbootimg --kernel /tmp/targetfiles-WQjmn2/BOOT/kernel --second /tmp/targetfiles-WQjmn2/BOOT/resource.img --ramdisk /tmp/tmp6LpDeb --output /tmp/tmppqQcvT running: drmsigntool /tmp/tmppqQcvT build/target/product/security/privateKey.bin src_path: /tmp/tmppqQcvT, private_key_path: build/target/product/security/privateKey.bin can't open file build/target/product/security/privateKey.bin! no find private key, so not sign boot.img! running: imgdiff -b /tmp/targetfiles-WQjmn2/SYSTEM/etc/recovery-resource.dat /tmp/tmpD07dY4 /tmp/tmpXulEpX /tmp/tmp1qudyL Traceback (most recent call last): File "./build/tools/releasetools/ota_from_target_files", line 1059, in <module> main(sys.argv[1:]) File "./build/tools/releasetools/ota_from_target_files", line 1027, in main WriteFullOTAPackage(input_zip, output_zip) File "./build/tools/releasetools/ota_from_target_files", line 502, in WriteFullOTAPackage Item.GetMetadata(input_zip) File "./build/tools/releasetools/ota_from_target_files", line 197, in GetMetadata key, value = element.split("=") ValueError: need more than 1 value to unpack make: *** [out/target/product/rk3288/rk3288-ota-eng.wake.zip] 错误 1
ValueError: Unknown mat file type, version 0, 0
训练模型导入.mat文件时出现如下错误: ``` ValueError: Unknown mat file type, version 0, 0 ``` 读取文件代码为: ``` np.array(sio.loadmat(image[0][i])['section'], dtype=np.float32) ``` 望大神指教!不胜感激!
python错误:ValueError: No JSON object could be decoded
#-*- coding:utf-8 -*- import requests from operator import itemgetter # 执行API调用并存储响应 url = 'http://hacker-news.firebaseio.com/v0/topstories.json' r = requests.get(url) print("Status code:", r.status_code) # 处理有关每篇文章的信息 submission_ids = r.json() submission_dicts = [] for submission_id in submission_ids[:30]: # 对于每篇文章,都执行一个API调用 url = ('http://hacker-news.firebaseio.com/v0/item/' + str(submission_id) + '.json') submission_r = requesets.get(url) print(submisssion_r.status_code) reponse_dict = submission_r.json() submission_dict = { 'title': resopnse_dict['title'], 'link': 'http://news.ycombinator.com/item?id=' + str(submission_id), 'comments': response_dict.get('descendants', 0) } submission_dicts.append(submission_dict) submission_dicts = sorted(submission_dicts, key=itemgetter('comments'), recerse=Ture) for submission_dict in submission_dicts: print("/nTitle:", submission_dict['title']) print("Discussion link:", submission_dict['link']) print("Comeents", submission_dict['comments'])
Keras报错 ‘ValueError: 'pool5' is not in list’
很长的一个project,在keras下实现VGG16。 这是报错的整个代码段: ``` for roi, roi_context in zip(rois, rois_context): ins = [im_in, dmap_in, np.array([roi]), np.array([roi_context])] print("Testing ROI {c}") subtimer.tic() blobs_out = model.predict(ins) subtimer.toc() print("Storing Results") print(layer_names) post_roi_layers = set(layer_names[layer_names.index("pool5"):]) for name, val in zip(layer_names, blobs_out): if name not in outs: outs[name] = val else: if name in post_roi_layers: outs[name] = np.concatenate([outs[name], val]) c += 1 ``` 报错信息: ``` Loading Test Data data is loaded from roidb_test_19_smol.pkl Number of Images to test: 10 Testing ROI {c} Storing Results ['cls_score', 'bbox_pred_3d'] Traceback (most recent call last): File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/main.py", line 6, in <module> results = test_main.test_tf_implementation(cache_file="roidb_test_19_smol.pkl", weights_path="rgbd_det_iter_40000.h5") File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test_main.py", line 36, in test_tf_implementation results = test.test_net(tf_model, roidb) File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 324, in test_net im_detect_3d(net, im, dmap, test['boxes'], test['boxes_3d'], test['rois_context']) File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 200, in im_detect_3d post_roi_layers = set(layer_names[layer_names.index("pool5"):]) ValueError: 'pool5' is not in list ```
ValueError: too many values to unpack (expected 2)
网上说是元素找不到对应的 代码如下: ``` import turtle file=open("C:/Users/jyz_1/Desktop/新建文本文档.txt") file=file.read() lines=file.split("重庆") i=0 lsy=[] for line in lines: #index the temprature inn=line.index('\n')#The first \n inc=line.index("C")#The first C if i==0: tu=int(line[line.find('\n',inn+1)+1:inc])#The second \n if "~" in line: tl=int(line[line.index('~')+1:line.rindex('C')]) else: tl=tu i=i+1 else: fn=line.find('\n',inn+1) tu=int(line[line.find('\n',fn+1)+1:inc])#The third \n if "~" in line: tl=int(line[line.index('~')+1:line.rindex('C')]) else: tl=tu t=(tl+tu)/2#daily average temprature lsy.append(t) #find the date lsx=[] dates=file.split("\n") for date in dates: if "-" in date: if date.replace("-","").isnumeric()==True: p1=date.index('-')#the first - p2=date.find('-',p1+1)#the second - month=date[p1+1:p2] day=date[p2+1:] date_on_x=int(month+day) lsx.append(date_on_x) #draw axis def drawx(): turtle.pu() turtle.goto(-50,-50) turtle.pd() turtle.fd(240) def drawy(): turtle.pu() turtle.goto(-50,-50) turtle.seth(90) turtle.pd() turtle.fd(160) #comment the axis def comx(): turtle.pu() turtle.goto(-50,-65) turtle.seth(0) for i in range(1,13): turtle.write(i) turtle.fd(20) def comy(): turtle.pu() turtle.goto(-75,-50) turtle.seth(90) for i in range(-30,51,10): turtle.write(float(i)) turtle.fd(20) #draw the rainbow def rainbow(): #define the color if t<8: turtle.color("purple") elif 8<=t<12: turtle.color("lightblue") elif 12<=t<22: turtle.color("green") elif 22<=t<28: turtle.color("yellow") elif 28<=t<30: turtle.color("orange") elif t>=30: turtle.color("red") #let's draw! for x,t in lsx,lsy: turtle.pu() turtle.goto(x,t) turtle.pd() turtle.circle(10) drawx() drawy() comx() comy() rainbow() ``` 报错: ``` Traceback (most recent call last): File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python37-32\32rx.py", line 92, in <module> rainbow(t) File "C:\Users\jyz_1\AppData\Local\Programs\Python\Python37-32\32rx.py", line 83, in rainbow for x,t in lsx,lsy: ValueError: too many values to unpack (expected 2) ``` 但是我用len发现lsx,lsy长度相同 也就是说,lsx,lsy中的元素一一对应 那这个报错是怎么回事?
Django创建超级用户时,出现错误 ValueError: invalid literal for int() with base 10: ''
ERROR exception 135 Internal Server Error: /users/ Traceback (most recent call last): File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/exception.py", line 41, in inner response = get_response(request) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view return view_func(*args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/views/generic/base.py", line 68, in view return self.dispatch(request, *args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 505, in dispatch response = self.handle_exception(exc) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 465, in handle_exception self.raise_uncaught_exception(exc) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception raise exc File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/views.py", line 502, in dispatch response = handler(request, *args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/generics.py", line 242, in post return self.create(request, *args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/mixins.py", line 19, in create self.perform_create(serializer) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/mixins.py", line 24, in perform_create serializer.save() File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/rest_framework/serializers.py", line 213, in save self.instance = self.create(validated_data) File "/home/python/dihai02/per02/apps/users/serializers/user.py", line 25, in create user = User.objects.create_user(**validated_data) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/models.py", line 159, in create_user return self._create_user(username, email, password, **extra_fields) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/models.py", line 153, in _create_user user.save(using=self._db) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/contrib/auth/base_user.py", line 80, in save super(AbstractBaseUser, self).save(*args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 808, in save force_update=force_update, update_fields=update_fields) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 838, in save_base updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 924, in _save_table result = self._do_insert(cls._base_manager, using, fields, update_pk, raw) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/base.py", line 963, in _do_insert using=using, raw=raw) File 
"/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/query.py", line 1076, in _insert return query.get_compiler(using=using).execute_sql(return_id) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1111, in execute_sql for sql, params in self.as_sql(): File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1064, in as_sql for obj in self.query.objs File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1064, in <listcomp> for obj in self.query.objs File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1063, in <listcomp> [self.prepare_value(field, self.pre_save_val(field, obj)) for field in fields] File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1003, in prepare_value value = field.get_db_prep_save(value, connection=self.connection) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 770, in get_db_prep_save prepared=False) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 762, in get_db_prep_value value = self.get_prep_value(value) File "/home/python/.virtualenvs/yuan/lib/python3.7/site-packages/django/db/models/fields/__init__.py", line 1853, in get_prep_value return int(value) ValueError: invalid literal for int() with base 10: ''
ValueError: could not broadcast input array from shape (100,100,3) into shape (100,100)
path是图片的路径 w,h是图片的设定长宽 ```def read_img(path): cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)] imgs=[] labels=[] for idx,folder in enumerate(cate): for im in glob.glob(folder+'/*.jpg'): print('reading the images:%s'%(im)) img=io.imread(im) img=transform.resize(img,(w,h)) imgs.append(img) labels.append(idx) return np.asarray(imgs,np.float32),np.asarray(labels,np.int32) data,label=read_img(path) ``` 我运行花卉图片加载的时候无错误,但换个路径运行猫狗识别的时候就报错 File "C:/Users/spirit/Desktop/实验练习/tensorflow/猫狗识别/训练模型/猫狗识别.py", line 34, in <module>data,label=read_img(path) File "C:/Users/spirit/Desktop/实验练习/tensorflow/猫狗识别/训练模型/猫狗识别.py", line 31, in read_img return np.asarray(imgs,np.float32),np.asarray(labels,np.int32) File "D:\Anaconda\envs\tensorflow\lib\site-packages\numpy\core\numeric.py", line 501, in asarray return array(a, dtype, copy=False, order=order) ValueError: could not broadcast input array from shape (100,100,3) into shape (100,100) 我真心不懂,只是换了其他图片加载,为什么就报错,真心求教! 我在想是不是我的猫狗图片出了问题,但看了也感觉没什么问题啊,头痛
爬虫过程中遇到报错:ValueError: can only parse strings
源代码如下: import requests import json from requests.exceptions import RequestException import time from lxml import etree def get_one_page(url): try: headers = { 'User-Agent': 'Mozilla/5.0(Macintosh;Intel Mac OS X 10_13_3) AppleWebKit/537.36(KHTML,like Gecko) Chorme/65.0.3325.162 Safari/537.36' } response = requests.get(url,headers = headers) if response.status_code == 200: return response.text return None except RequestException: return None def parse_one_page(html): html_coner = etree.HTML(html) pattern = html_coner.xpath('//div[@id="container"]/div[@id="main"/div[@class = "ywnr_box"]//a/text()') return pattern def write_to_file(content): with open('results.txt','a',encoding='utf-8') as f: f.write(json.dumps(content,ensure_ascii=False)+'\n') def main(offset): url = 'http://www.cdpf.org.cn/yw/index_'+str(offset)+'.shtml' html = get_one_page(url) for item in parse_one_page(html): print(item) write_to_file(item) if __name__ == '__main__': for i in range(6): main(offset=i*10) time.sleep(1) 请问各位大佬到底是哪里出了错??
如何解决ValueError: Length mismatch: Expected axis has 20 elements, new values have 19 elements
![图片说明](https://img-ask.csdn.net/upload/201912/07/1575690360_789348.png) 代码如下: import numpy as np import pandas as pd from GM11 import GM11 inputfile = 'D:\\软件\\python\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\数据及代码\\chapter13\\test\\data\\data1.csv' #输入的数据文件 outputfile = 'D:\\软件\\python\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\数据及代码\\chapter13\\test\\data\\data1_GM11.xls' #灰色预测后保存的路径 data = pd.read_csv('D:\\软件\\python\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\《Python数据分析与挖掘实战(张良均等)》中文PDF+源代码\\数据及代码\\chapter13\\test\\data\\data1.csv',engine='python') #读取数据 data.index = range(1993, 2012) data.loc[2013] = None data.loc[2014] = None l = ['x1', 'x2', 'x3', 'x4', 'x5', 'x7'] for i in l: f = GM11(data[i][arange(1993, 2012)].as_matrix())[0] data[i][2013] = f(len(data)-1) #2013年预测结果 data[i][2014] = f(len(data)) #2014年预测结果 data[i] = data[i].round(2) #保留两位小数 data[l+['y']].to_excel(outputfile) #结果输出 if (C < 0.35 and P > 0.95): # 评测后验差判别 print ('对于模型%s,该模型精度为---好' % i) elif (C < 0.5 and P > 0.8): print ('对于模型%s,该模型精度为---合格' % i) elif (C < 0.65 and P > 0.7): print ('对于模型%s,该模型精度为---勉强合格' % i) else: print ('对于模型%s,该模型精度为---不合格' % i)
关于object detection运行视频检测代码出现报错:ValueError:assignment destination is read-only
我参考博主 withzheng的博客:https://blog.csdn.net/xiaoxiao123jun/article/details/76605928 在视频物体识别的部分中,我用的是Anaconda自带的spyder(python3.6)来运行他给的视频检测代码,出现了如下报错,![图片说明](https://img-ask.csdn.net/upload/201904/20/1555752185_448895.jpg) 具体报错: Moviepy - Building video video1_out.mp4. Moviepy - Writing video video1_out.mp4 t: 7%|▋ | 7/96 [00:40<09:17, 6.26s/it, now=None]Traceback (most recent call last): File "", line 1, in runfile('C:/models-master1/research/object_detection/object_detection_tutorial (1).py', wdir='C:/models-master1/research/object_detection') File "C:\Users\Administrator\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile execfile(filename, namespace) File "C:\Users\Administrator\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile exec(compile(f.read(),filename,'exec'), namespace) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 273, in white_clip.write_videofile(white_output, audio=False) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 54, in requires_duration return f(clip, *a, **k) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 137, in use_clip_fps_by_default return f(clip, *new_a, **new_kw) File "", line 2, in write_videofile File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 22, in convert_masks_to_RGB return f(clip, *a, **k) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 326, in write_videofile logger=logger) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\io\ffmpeg_writer.py", line 216, in ffmpeg_write_video fps=fps, dtype="uint8"): File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 475, in iter_frames frame = self.get_frame(t) File "", line 2, in get_frame File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\decorators.py", line 89, in wrapper return f(*new_a, **new_kw) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 95, in get_frame return self.make_frame(t) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\Clip.py", line 138, in newclip = self.set_make_frame(lambda t: fun(self.get_frame, t)) File "C:\Users\Administrator\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 511, in return self.fl(lambda gf, t: image_func(gf(t)), apply_to) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 267, in process_image image_process=detect_objects(image,sess,detection_graph) File "C:/models-master1/research/object_detection/object_detection_tutorial (1).py", line 258, in detect_objects line_thickness=8) File "C:\models-master1\research\object_detection\utils\visualization_utils.py", line 743, in visualize_boxes_and_labels_on_image_array use_normalized_coordinates=use_normalized_coordinates) File "C:\models-master1\research\object_detection\utils\visualization_utils.py", line 129, in draw_bounding_box_on_image_array np.copyto(image, np.array(image_pil)) ValueError: assignment destination is read-only 想问问各位大神有遇到过类似的问题吗。。如何解决?
在Cent OS中复现已发表文章的 神经网络训练过程,报错ValueError: low >= high
``` Traceback (most recent call last): File "trainIEEE39LoadSheddingAgent.py", line 139, in <module> env.reset() File "/root/RLGC/src/py/PowerDynSimEnvDef_v3.py", line 251, in reset fault_bus_idx = np.random.randint(0, total_fault_buses)# an integer, in the range of [0, total_bus_num-1] File "mtrand.pyx", line 630, in numpy.random.mtrand.RandomState.randint File "bounded_integers.pyx", line 1228, in numpy.random.bounded_integers._rand_int64 ValueError: low >= high ``` 报错如上,为什么会这样报错?如何解决?谢谢!
ValueError: multilabel-indicator format is not supported的报错原因?
报错ValueError: multilabel-indicator format is not supported? 这个报错意思比较明确,不支持多分类,但我模型里y的label定义就是0和1,binary,为啥会有这个报错? 一个图像2分类的keras模型,总样本量=120,其中label"0"=110,label"1"=10,非平衡, 代码如下: data = np.load('D:/a.npz') image_data, label_data= data['image'], data['label'] skf = StratifiedKFold(n_splits=3, shuffle=True) for train, test in skf.split(image_data, label_data): train_x=image_data[train] test_x=image_data[test] train_y=label_data[train] test_y=label_data[test] train_x = train_x.reshape(81,50176) test_x = test_x.reshape(39,50176) train_y = keras.utils.to_categorical(train_y,2) test_y = keras.utils.to_categorical(test_y,2) model = Sequential() model.add(Dense(units=128,activation="relu",input_shape=(50176,))) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=128,activation="relu")) model.add(Dense(units=2,activation="sigmoid")) model.compile(optimizer=SGD(0.001),loss="binary_crossentropy",metrics=["accuracy"]) model.fit(train_x, train_y,batch_size=32,epochs=5,verbose=1) y_pred_model = model.predict_proba(test_x)[:,1] fpr_model, tpr_model, _ = roc_curve(test_y, y_pred_model) 报错提示如下: ---> 63 fpr_model, tpr_model, _ = roc_curve(test_y, y_pred_model) ValueError: multilabel-indicator format is not supported
关于celery启动任务时报错Thread 'ResultHandler' crashed: ValueError('invalid file descriptor 13',)
我使用celery定时器执行任务,可启动时会出现这个错误 ``` [2019-10-22 09:13:30,334: INFO/MainProcess] Connected to redis://127.0.0.1:6379/14 [2019-10-22 09:13:30,361: INFO/MainProcess] mingle: searching for neighbors [2019-10-22 09:13:30,532: INFO/Beat] beat: Starting... [2019-10-22 09:13:31,072: ERROR/Beat] Thread 'ResultHandler' crashed: ValueError('invalid file descriptor 13',) Traceback (most recent call last): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 899, in body for _ in self._process_result(1.0): # blocking File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 864, in _process_result ready, task = poll(timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 1370, in _poll_result if self._outqueue._reader.poll(timeout): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 285, in poll return self._poll(timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 463, in _poll r = wait([self], timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 996, in wait return _poll(object_list, timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 976, in _poll raise ValueError('invalid file descriptor %i' % fd) ValueError: invalid file descriptor 13 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 504, in run return self.body() File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 904, in body self.finish_at_shutdown() File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 953, in finish_at_shutdown if not outqueue._reader.poll(): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 285, in poll return self._poll(timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 463, in _poll r = wait([self], timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 991, in wait return _poll(object_list, 0) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 976, in _poll raise ValueError('invalid file descriptor %i' % fd) ValueError: invalid file descriptor 13 [2019-10-22 09:13:31,408: INFO/MainProcess] mingle: all alone [2019-10-22 09:13:31,423: INFO/MainProcess] celery@iZwz9h41nalpsqzz57x4tmZ ready. 
``` 重启几次这个错误就不会出现,但运行一段时间后再次从定时器发布任务时还会出现该错误导致任务执行失败 ``` [2019-10-22 07:00:00,000: INFO/Beat] Scheduler: Sending due task monitoring_auto_run (run.monitoring_auto_run) [2019-10-22 07:00:00,008: INFO/MainProcess] Received task: run.monitoring_auto_run[469ee195-3e1a-4bf0-a7cb-783232e8d0bc] [2019-10-22 07:00:00,105: ERROR/ForkPoolWorker-11] Thread 'ResultHandler' crashed: ValueError('invalid file descriptor 14',) Traceback (most recent call last): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 504, in run return self.body() File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 899, in body for _ in self._process_result(1.0): # blocking File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 864, in _process_result ready, task = poll(timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 1370, in _poll_result if self._outqueue._reader.poll(timeout): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 285, in poll return self._poll(timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 463, in _poll r = wait([self], timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 996, in wait return _poll(object_list, timeout) File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/connection.py", line 976, in _poll raise ValueError('invalid file descriptor %i' % fd) ValueError: invalid file descriptor 14 [2019-10-22 07:00:00,768: ERROR/MainProcess] Process 'ForkPoolWorker-11' pid:15068 exited with 'exitcode 1' [2019-10-22 07:00:11,186: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: exitcode 1.',) Traceback (most recent call last): File "/pyenvs/spider/lib64/python3.6/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost human_status(exitcode)), billiard.exceptions.WorkerLostError: Worker exited prematurely: exitcode 1. ```
错误提示ValueError: unsupported format character
应该是这一段 '''将方法体中的host字段进行替换''' def get_raw_body(self, req, ip): ip = self.get_host_from_url(ip) host_reg = re.compile(r'Host:\s([a-z\.A-Z0-9]+)') host = host_reg.findall(req) if not host or host[0] == '': print ('[-]ERROR MESSAGE!Wrong format for request body') sys.exit() req, num = re.subn(host_reg, "Host: %s", req) return req % ip 错误提示: return req % (ip) ValueError: unsupported format character '{' (0x7b) at index 31 源程序是2.7,我的是3.6,不想卸载去下2.7,为了这一个程序不值得...
ValueError: None values not supported.
Traceback (most recent call last): File "document_summarizer_training_testing.py", line 296, in <module> tf.app.run() File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "document_summarizer_training_testing.py", line 291, in main train() File "document_summarizer_training_testing.py", line 102, in train model = MY_Model(sess, len(vocab_dict)-2) File "/home/lyliu/Refresh-master-self-attention/my_model.py", line 70, in __init__ self.train_op_policynet_expreward = model_docsum.train_neg_expectedreward(self.rewardweighted_cross_entropy_loss_multi File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in train_neg_expectedreward grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars] File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in <listcomp> grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars] File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py", line 107,rm t = ops.convert_to_tensor(t, name="t") File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 676o_tensor as_ref=False) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 741convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", constant_tensor_conversion_function return constant(v, dtype=dtype, name=name) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", onstant tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape)) File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", ake_tensor_proto raise ValueError("None values not supported.") ValueError: None values not supported. 使用tensorflow gpu版本 tensorflow 1.2.0。希望找到解决方法或者出现这个错误的原因
python调用cv2.findContours时报错:ValueError: not enough values to unpack (expected 3, got 2)
完整代码如下: ``` import cv2 import numpy as np img = np.zeros((200, 200), dtype=np.uint8) img[50:150, 50:150] = 255 ret, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY) image, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) img = cv2.drawContours(color, contours, -1, (0,255,0), 2) cv2.imshow("contours", color) cv2.waitKey() cv2.destroyAllWindows() ``` 但是cv2.findContours报如下错误: ValueError: not enough values to unpack (expected 3, got 2) python版本为3.6,opencv为4.0.0
Errors while installing node.js on Linux, hoping someone can help
I am about to deploy an application to a Linux server. Installing node.js threw one error after another: first it was a Python version problem, so I installed Python 2.7.5; then running ./configure produced this:

```
ERROR:root:code for hash md5 was not found.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module>
    globals()[__func_name] = __get_hash(__func_name)
  File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
    raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type md5
ERROR:root:code for hash sha1 was not found.
[... the same traceback and ValueError repeat for sha1, sha224, sha256 and sha384 ...]
ERROR:root:code for hash sha512 was not found.
```

A friend says it is an OpenSSL version problem; it is still unsolved. Has anyone hit something similar? Any pointers would be appreciated, my head is about to explode.
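Those `unsupported hash type` errors usually mean the Python 2.7 interpreter itself was built without OpenSSL support, so the `_hashlib` extension that backs md5/sha* is missing and node.js's ./configure fails as soon as it imports `hashlib`. The snippet below is only a diagnostic sketch to confirm that; the actual remedy is to install the OpenSSL development headers and rebuild Python against them.

```
# Diagnostic sketch only, not a fix: if either import fails, this Python build
# has no OpenSSL-backed hash support, which is what produces the errors above.
import ssl        # fails when Python was compiled without OpenSSL
import _hashlib   # the C extension that provides the OpenSSL hash algorithms

print(ssl.OPENSSL_VERSION)  # shows which OpenSSL the interpreter was linked against
```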
How to handle missing values when plotting two line charts with matplotlib, where one of the series has empty cells?
How do I handle empty values in one of the two series when plotting two line charts with matplotlib?

```
import csv
from datetime import datetime
from matplotlib import pyplot as plt

filename = 'data.csv'
with open(filename) as f:
    reader = csv.reader(f)
    header_row = next(reader)

    dates = []
    temp1 = []
    temp2 = []
    for row in reader:
        time = row[1] + '-' + row[2] + '-' + row[3]
        current_date = datetime.strptime(time, "%Y-%m-%d")
        dates.append(current_date)
        tempm1 = int(float(row[7]))
        temp1.append(tempm1)
        tempm2 = int(row[28])
        temp2.append(tempm2)

fig = plt.figure(dpi=128, figsize=(10, 6))
plt.plot(dates, temp1, c='red')
plt.plot(dates, temp2, c='blue')
plt.show()
```

The error:

```
C:\Users\yo\AppData\Local\Programs\Python\Python37\python.exe F:/论文/data/compare/compare.py
Traceback (most recent call last):
  File "F:/论文/data/compare/compare.py", line 26, in <module>
    tempm2 = int(row[28])
ValueError: invalid literal for int() with base 10: '20.075'

Process finished with exit code 1
```

I tried an `if` check to skip empty values, but then the two series no longer line up. What I want is for empty values simply not to show on the chart. The data contains decimals, which is why I use int(float(...)). Thanks in advance for any help!
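Two things are going on here: `int(row[28])` fails on `'20.075'` because the cell is a decimal string, and skipping empty rows misaligns the two series. One sketch of a way to handle both, assuming the cells may be empty or decimal: parse every value with `float()`, map empty cells to `NaN`, and keep appending to all three lists so they stay the same length; matplotlib then leaves a gap at the `NaN` points instead of drawing them.

```
# A sketch, assuming cells may be empty or contain decimal strings.
import numpy as np

def parse_temp(cell):
    """Return the cell as a float, or NaN when the cell is empty."""
    cell = cell.strip()
    return float(cell) if cell else np.nan

# Inside the CSV loop, replacing the int(...) conversions:
#     temp1.append(parse_temp(row[7]))
#     temp2.append(parse_temp(row[28]))
# dates, temp1 and temp2 then stay aligned, and plt.plot leaves gaps at NaN.
```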
ValueError: invalid literal for int() with base 10: 'aer'
```
# coding=utf-8
# Version: python3.6.0
# Tools: Pycharm 2017.3.2
import numpy as np
import tensorflow as tf
import re

TRAIN_PATH = "data/ptb.train.txt"
EVAL_PATH = "data/ptb.valid.txt"
TEST_PATH = "data/ptb.test.txt"
HIDDEN_SIZE = 300
NUM_LAYERS = 2
VOCAB_SIZE = 10000
TRAIN_BATCH_SIZE = 20
TRAIN_NUM_STEP = 35
EVAL_BATCH_SIZE = 1
EVAL_NUM_STEP = 1
NUM_EPOCH = 5
LSTM_KEEP_PROB = 0.9
EMBEDDING_KEEP_PROB = 0.9
MAX_GRED_NORM = 5
SHARE_EMB_AND_SOFTMAX = True


class PTBModel(object):
    def __init__(self, is_training, batch_size, num_steps):
        self.batch_size = batch_size
        self.num_steps = num_steps
        self.input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
        self.targets = tf.placeholder(tf.int32, [batch_size, num_steps])

        dropout_keep_prob = LSTM_KEEP_PROB if is_training else 1.0
        lstm_cells = [
            tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE),
                                          output_keep_prob=dropout_keep_prob)
            for _ in range(NUM_LAYERS)]
        cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)

        self.initial_state = cell.zero_state(batch_size, tf.float32)
        embedding = tf.get_variable("embedding", [VOCAB_SIZE, HIDDEN_SIZE])
        inputs = tf.nn.embedding_lookup(embedding, self.input_data)
        if is_training:
            inputs = tf.nn.dropout(inputs, EMBEDDING_KEEP_PROB)

        outputs = []
        state = self.initial_state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0:
                    tf.get_variable_scope().reuse_variables()
                cell_output, state = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output)
        # Expand the output list to [batch, hidden_size*num_steps],
        # then reshape to [batch*num_steps, hidden_size].
        output = tf.reshape(tf.concat(outputs, 1), [-1, HIDDEN_SIZE])

        if SHARE_EMB_AND_SOFTMAX:
            weight = tf.transpose(embedding)
        else:
            weight = tf.get_variable("weight", [HIDDEN_SIZE, VOCAB_SIZE])
        bias = tf.get_variable("bias", [VOCAB_SIZE])
        logits = tf.matmul(output, weight) + bias

        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.reshape(self.targets, [-1]),
            logits=logits
        )
        self.cost = tf.reduce_sum(loss) / batch_size
        self.final_state = state

        # Define back-propagation ops only when training.
        if not is_training:
            return
        trainable_variables = tf.trainable_variables()
        # Clip the gradient norm.
        grads, _ = tf.clip_by_global_norm(
            tf.gradients(self.cost, trainable_variables), MAX_GRED_NORM)
        # Define the optimizer.
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
        # zip() packs the corresponding elements of its iterables into tuples
        # and returns an object made of those tuples, which saves memory.
        # Define the training step.
        self.train_op = optimizer.apply_gradients(
            zip(grads, trainable_variables))


def run_epoch(session, model, batches, train_op, output_log, step):
    total_costs = 0.0
    iters = 0
    state = session.run(model.initial_state)
    for x, y in batches:
        cost, state, _ = session.run(
            [model.cost, model.final_state, train_op],
            {model.input_data: x, model.targets: y,
             model.initial_state: state}
        )
        total_costs += cost
        iters += model.num_steps
        # Output the log only during training.
        if output_log and step % 100 == 0:
            print("After %d steps,perplexity is %.3f" % (
                step, np.exp(total_costs / iters)
            ))
        step += 1
    return step, np.exp(total_costs / iters)


# Read the data file and return a list of word ids.
def read_data(file_path):
    with open(file_path, "r") as fin:
        id_string = " ".join([line.strip() for line in fin.readlines()])
    id_list = [int(w) for w in id_string.split()]  # convert the word ids to integers
    return id_list


def make_batches(id_list, batch_size, num_step):
    # Total number of batches; each batch holds batch_size*num_step words.
    # Integer division so the slice and reshape sizes are ints.
    num_batches = (len(id_list) - 1) // (batch_size * num_step)
    data = np.array(id_list[:num_batches * batch_size * num_step])
    data = np.reshape(data, [batch_size, num_batches * num_step])
    data_batches = np.split(data, num_batches, axis=1)

    label = np.array(id_list[1:num_batches * batch_size * num_step + 1])
    label = np.reshape(label, [batch_size, num_batches * num_step])
    label_batches = np.split(label, num_batches, axis=1)
    return list(zip(data_batches, label_batches))


def main():
    # Define the initializer.
    intializer = tf.random_uniform_initializer(-0.05, 0.05)
    with tf.variable_scope("language_model", reuse=None, initializer=intializer):
        train_model = PTBModel(True, TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
    with tf.variable_scope("language_model", reuse=True, initializer=intializer):
        eval_model = PTBModel(False, EVAL_BATCH_SIZE, EVAL_NUM_STEP)

    with tf.Session() as session:
        tf.global_variables_initializer().run()
        train_batches = make_batches(read_data(TRAIN_PATH), TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
        eval_batches = make_batches(read_data(EVAL_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)
        test_batches = make_batches(read_data(TEST_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)

        step = 0
        for i in range(NUM_EPOCH):
            print("In iteration:%d" % (i + 1))
            step, train_pplx = run_epoch(session, train_model, train_batches,
                                         train_model.train_op, True, step)
            print("Epoch:%d Train perplexity:%.3f" % (i + 1, train_pplx))
            _, eval_pplx = run_epoch(session, eval_model, eval_batches,
                                     tf.no_op(), False, 0)
            print("Epoch:%d Eval perplexity:%.3f" % (i + 1, eval_pplx))
        _, test_pplx = run_epoch(session, eval_model, test_batches,
                                 tf.no_op(), False, 0)
        print("Test perplexity:%.3f" % test_pplx)


if __name__ == '__main__':
    main()
```
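The reported `ValueError: invalid literal for int() with base 10: 'aer'` comes from `read_data`, which calls `int(w)` on every token: it expects input files in which each word has already been replaced by its vocabulary id, while the `ptb.train.txt` being loaded still contains raw words ('aer' happens to be an early token of the raw PTB training text). Below is a hedged preprocessing sketch; the vocabulary file name and the `<unk>`/`<eos>` conventions are assumptions, not something stated in the question.

```
# A preprocessing sketch (assumptions: a one-word-per-line vocab file exists,
# '<unk>' covers out-of-vocabulary words, '<eos>' marks the end of a sentence).
import codecs

def words_to_ids(raw_path, vocab_path, out_path):
    with codecs.open(vocab_path, "r", "utf-8") as f:
        vocab = {w.strip(): i for i, w in enumerate(f)}
    unk_id = vocab.get("<unk>", 0)

    with codecs.open(raw_path, "r", "utf-8") as fin, \
         codecs.open(out_path, "w", "utf-8") as fout:
        for line in fin:
            words = line.strip().split() + ["<eos>"]
            fout.write(" ".join(str(vocab.get(w, unk_id)) for w in words) + "\n")

# e.g. words_to_ids("data/ptb.train.txt", "data/ptb.vocab", "data/ptb.train")
# and then point TRAIN_PATH at the converted file.
```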