TensorFlow 2.0: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.

The code below raises this error every time it reaches the last step of an epoch. Could anyone explain what the problem is?

```
import math

import tensorflow as tf
import tensorflow_datasets as tfds

dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
                          as_supervised=True)

train_dataset, test_dataset = dataset['train'], dataset['test']

tokenizer = info.features['text'].encoder
print('vocabulary size: ', tokenizer.vocab_size)

# Round-trip a sample string through the subword encoder.
sample_string = 'Hello world, tensorflow'
tokenized_string = tokenizer.encode(sample_string)
print('tokenized ids: ', tokenized_string)

src_string = tokenizer.decode(tokenized_string)
print(src_string)

for t in tokenized_string:
    print(str(t) + ': ' + tokenizer.decode([t]))

BUFFER_SIZE = 6400
BATCH_SIZE = 64

num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples

print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples:     {}".format(num_test_examples))

# Shuffle, then pad each batch to the length of its longest sequence.
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)


def get_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    return model


model = get_model()
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90,
                    validation_data=test_dataset)
```
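Here `steps_per_epoch = math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90 = math.ceil(6400 / 64) - 90 = 100 - 90 = 10`, which matches the "Train on 10 steps" line in the output below.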

```
Train on 10 steps
Epoch 1/2
 9/10 [==========================>...] - ETA: 3s - loss: 0.6955 - accuracy: 0.4479

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
      6                 epochs=2,
      7                 steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ),
----> 8                 validation_data= test_dataset)

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726           max_queue_size=max_queue_size,
    727           workers=workers,
--> 728           use_multiprocessing=use_multiprocessing)
    729
    730   def evaluate(self,

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    672         validation_steps=validation_steps,
    673         validation_freq=validation_freq,
--> 674         steps_name='steps_per_epoch')
    675
    676   def evaluate(self,

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    437             validation_in_fit=True,
    438             prepared_feed_values_from_dataset=(val_iterator is not None),
--> 439             steps_name='validation_steps')
    440         if not isinstance(val_results, list):
    441           val_results = [val_results]

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    174   if not is_dataset:
    175     num_samples_or_steps = _get_num_samples_or_steps(ins, batch_size,
--> 176                                                      steps_per_epoch)
    177   else:
    178     num_samples_or_steps = steps_per_epoch

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in _get_num_samples_or_steps(ins, batch_size, steps_per_epoch)
    491     return steps_per_epoch
    492   return training_utils.check_num_samples(ins, batch_size, steps_per_epoch,
--> 493                                           'steps_per_epoch')
    494
    495

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name)
    422     raise ValueError('If ' + steps_name +
    423                      ' is set, the batch_size must be None.')
--> 424   if check_steps_argument(ins, steps, steps_name):
    425     return None
    426

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_steps_argument(input_data, steps, steps_name)
   1199     raise ValueError('When using {input_type} as input to a model, you should'
   1200                      ' specify the {steps_name} argument.'.format(
-> 1201                          input_type=input_type_str, steps_name=steps_name))
   1202   return True
   1203

ValueError: When using data tensors as input to a model, you should specify the steps_per_epoch argument.
```
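Judging from the traceback, the failure happens not in the training loop itself but when validation starts at the end of the first epoch: `fit` re-enters `model_iteration` with `steps_name='validation_steps'`, and since no `validation_steps` was passed for `test_dataset`, the step check raises (the final message just happens to be worded in terms of `steps_per_epoch`). A minimal sketch of the likely fix, reusing the variables defined above, is to tell Keras explicitly how many validation batches to draw:

```
import math

# Assumed fix: give fit() an explicit number of validation batches so it
# does not have to infer the cardinality of test_dataset.
validation_steps = math.ceil(num_test_examples / BATCH_SIZE)

history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90,
                    validation_data=test_dataset,
                    validation_steps=validation_steps)
```

With `validation_steps` set, the validation pass at the end of each epoch has a defined length; depending on the TF 2.x release, omitting both step arguments and letting Keras run each (finite) dataset to exhaustion may also work.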

Csdn user default icon
上传中...
上传图片
插入图片
抄袭、复制答案,以达到刷声望分或其他目的的行为,在CSDN问答是严格禁止的,一经发现立刻封号。是时候展现真正的技术了!
其他相关推荐
在训练Tensorflow模型(object_detection)时,训练在第一次评估后退出,怎么使训练继续下去?
当我进行ssd模型训练时,训练进行了10分钟,然后进入评估阶段,评估之后程序就自动退出了,没有看到误和警告,这是为什么,怎么让程序一直训练下去? 训练命令: ``` python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr ``` 配置文件: ``` training exit after the first evaluation(only one evaluation) in Tensorflow model(object_detection) without error and waring System information What is the top-level directory of the model you are using:models/research/object_detection/ Have I written custom code (as opposed to using a stock example script provided in TensorFlow):NO OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Windows-10(64bit) TensorFlow installed from (source or binary):conda install tensorflow-gpu TensorFlow version (use command below):1.13.1 Bazel version (if compiling from source):N/A CUDA/cuDNN version:cudnn-7.6.0 GPU model and memory:GeForce GTX 1060 6GB Exact command to reproduce:See below my command for training : python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr This is my config : train_config { batch_size: 24 data_augmentation_options { random_horizontal_flip { } } data_augmentation_options { ssd_random_crop { } } optimizer { rms_prop_optimizer { learning_rate { exponential_decay_learning_rate { initial_learning_rate: 0.00400000018999 decay_steps: 800720 decay_factor: 0.949999988079 } } momentum_optimizer_value: 0.899999976158 decay: 0.899999976158 epsilon: 1.0 } } fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt" from_detection_checkpoint: true num_steps: 200000 train_input_reader { label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt" tf_record_input_reader { input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord" } } eval_config { num_examples: 8000 max_evals: 10 use_moving_averages: false } eval_input_reader { label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt" shuffle: false num_readers: 1 tf_record_input_reader { input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord" } ``` 窗口输出: (default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue. WARNING:tensorflow:Forced number of epochs for all eval validations to be 1. WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1. 
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn..model_fn at 0x0000027CBAB7BB70>) includes params argument, but params are not passed to Estimator. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.experimental.parallel_interleave(...). WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.Dataset.batch(..., drop_remainder=True). WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. 2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845 pciBusID: 0000:04:00.0 totalMemory: 6.00GiB freeMemory: 4.97GiB 2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1) WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. 
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. Instructions for updating: tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape. 2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1) WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix. creating index... index created! creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=2.43s). Accumulating evaluation results... DONE (t=0.14s). Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.287 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.529 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.278 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.162 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.356 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.356 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384 (default) D:\gitcode\models\research>
Tensorflow测试训练styleGAN时报错 No OpKernel was registered to support Op 'NcclAllReduce' with these attrs.
在测试官方StyleGAN。 运行官方与训练模型pretrained_example.py generate_figures.py 没有问题。GPU工作正常。 运行train.py时报错 尝试只用单个GPU训练时没有报错。 NcclAllReduce应该跟多GPU通信有关,不太了解。 InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] 经过多番google 尝试过 重启 conda install keras-gpu 重新安装tensorflow-gpu==1.10.0(跟官方版本保持一致) ``` …… Building TensorFlow graph... Setting up snapshot image grid... Setting up run dir... Training... Traceback (most recent call last): File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call return fn(*args) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 230, in training_loop tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch}) File "E:\MachineLearning\stylegan-master\dnnlib\tflib\tfutil.py", line 26, in run return tf.get_default_session().run(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 877, in run run_metadata_ptr) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run feed_dict_tensor, options, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] Caused by op 'TrainD/SumAcrossGPUs/NcclAllReduce', defined at: File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 185, in training_loop D_train_op = D_opt.apply_updates() File "E:\MachineLearning\stylegan-master\dnnlib\tflib\optimizer.py", line 135, in apply_updates g = nccl_ops.all_sum(g) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 49, in all_sum return _apply_all_reduce('sum', tensors) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 230, in _apply_all_reduce shared_name=shared_name)) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\ops\gen_nccl_ops.py", line 59, in nccl_all_reduce num_devices=num_devices, shared_name=shared_name, name=name) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func return func(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3156, in create_op op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] ``` ``` #conda list: # Name Version Build Channel _tflow_select 2.1.0 gpu absl-py 0.8.1 pypi_0 pypi alabaster 0.7.12 py36_0 asn1crypto 1.2.0 py36_0 astor 0.8.0 pypi_0 pypi astroid 2.3.2 py36_0 attrs 19.3.0 py_0 babel 2.7.0 py_0 backcall 0.1.0 py36_0 blas 1.0 mkl bleach 3.1.0 py36_0 ca-certificates 2019.10.16 0 certifi 2019.9.11 py36_0 cffi 1.13.1 py36h7a1dbc1_0 chardet 3.0.4 py36_1003 cloudpickle 1.2.2 py_0 colorama 0.4.1 py36_0 cryptography 2.8 py36h7a1dbc1_0 cudatoolkit 9.0 1 cudnn 7.6.4 cuda9.0_0 decorator 4.4.1 py_0 defusedxml 0.6.0 py_0 django 2.2.7 pypi_0 pypi docutils 0.15.2 py36_0 entrypoints 0.3 py36_0 gast 0.3.2 py_0 grpcio 1.25.0 pypi_0 pypi h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 idna 2.8 pypi_0 pypi image 1.5.27 pypi_0 pypi imagesize 1.1.0 py36_0 importlib_metadata 0.23 py36_0 intel-openmp 2019.4 245 ipykernel 5.1.3 py36h39e3cac_0 ipython 7.9.0 py36h39e3cac_0 ipython_genutils 0.2.0 py36h3c5d0ee_0 isort 4.3.21 py36_0 jedi 0.15.1 py36_0 jinja2 2.10.3 py_0 jpeg 9b hb83a4c4_2 jsonschema 3.1.1 py36_0 jupyter_client 5.3.4 py36_0 jupyter_core 4.6.1 py36_0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-gpu 2.2.4 0 keras-preprocessing 1.1.0 py_1 keyring 18.0.0 py36_0 lazy-object-proxy 1.4.3 py36he774522_0 libpng 1.6.37 h2a8f88b_0 libprotobuf 3.9.2 h7bd577a_0 libsodium 1.0.16 h9d3ae62_0 markdown 3.1.1 py36_0 markupsafe 1.1.1 py36he774522_0 mccabe 0.6.1 py36_1 mistune 0.8.4 py36he774522_0 mkl 2019.4 245 mkl-service 2.3.0 py36hb782905_0 mkl_fft 1.0.15 py36h14836fe_0 mkl_random 1.1.0 py36h675688f_0 more-itertools 7.2.0 py36_0 nbconvert 5.6.1 py36_0 nbformat 4.4.0 py36h3a5bc1b_0 numpy 1.17.3 py36h4ceb530_0 numpy-base 1.17.3 py36hc3f5095_0 numpydoc 0.9.1 py_0 openssl 1.1.1d he774522_3 packaging 19.2 py_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py36_1 parso 0.5.1 py_0 pickleshare 0.7.5 py36_0 pillow 6.2.1 pypi_0 pypi pip 19.3.1 py36_0 prompt_toolkit 2.0.10 py_0 protobuf 3.10.0 pypi_0 pypi psutil 5.6.3 py36he774522_0 pycodestyle 2.5.0 py36_0 pycparser 2.19 py36_0 pyflakes 2.1.1 py36_0 pygments 2.4.2 py_0 pylint 2.4.3 py36_0 pyopenssl 19.0.0 py36_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 pyrsistent 0.15.4 py36he774522_0 pysocks 1.7.1 py36_0 python 3.6.9 h5500b2f_0 python-dateutil 2.8.1 py_0 pytz 2019.3 py_0 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.2 py36he774522_0 pyzmq 18.1.0 py36ha925a31_0 qt 5.9.7 vc14h73c81de_0 qtawesome 0.6.0 py_0 qtconsole 4.5.5 py_0 qtpy 1.9.0 py_0 requests 2.22.0 py36_0 rope 0.14.0 py_0 scipy 1.3.1 py36h29ff71c_0 setuptools 39.1.0 pypi_0 pypi sip 4.19.8 py36h6538335_0 six 1.13.0 pypi_0 pypi snowballstemmer 2.0.0 py_0 sphinx 2.2.1 py_0 sphinxcontrib-applehelp 1.0.1 py_0 sphinxcontrib-devhelp 1.0.1 py_0 sphinxcontrib-htmlhelp 1.0.2 py_0 sphinxcontrib-jsmath 1.0.1 py_0 sphinxcontrib-qthelp 1.0.2 py_0 sphinxcontrib-serializinghtml 1.1.3 py_0 spyder 3.3.6 py36_0 spyder-kernels 0.5.2 py36_0 sqlite 3.30.1 he774522_0 sqlparse 0.3.0 pypi_0 pypi tensorboard 1.10.0 py36he025d50_0 tensorflow 1.10.0 gpu_py36h3514669_0 tensorflow-base 1.10.0 gpu_py36h6e53903_0 tensorflow-gpu 1.10.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi testpath 0.4.2 py36_0 tornado 6.0.3 py36he774522_0 traitlets 4.3.3 py36_0 typed-ast 1.4.0 py36he774522_0 urllib3 
1.25.6 pypi_0 pypi vc 14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 wcwidth 0.1.7 py36h3d5aa90_0 webencodings 0.5.1 py36_1 werkzeug 0.16.0 py_0 wheel 0.33.6 py36_0 win_inet_pton 1.1.0 py36_0 wincertstore 0.2 py36h7fe50ca_0 wrapt 1.11.2 py36he774522_0 yaml 0.1.7 hc54c509_2 zeromq 4.3.1 h33f27b4_3 zipp 0.6.0 py_0 zlib 1.2.11 h62dcd97_3 ``` 2*RTX2080Ti driver 4.19.67
keras报错:All inputs to the layer should be tensors.
深度学习小白,初次使用keras构建网络,遇到问题向各位大神请教: ``` from keras.models import Sequential from keras.layers import Embedding from keras.layers import Dense, Activation from keras.layers import Concatenate from keras.layers import Add 构建了一些嵌入层_ model_store = Embedding(1115, 10) model_dow = Embedding(7, 6) model_day = Embedding(31, 10) model_month = Embedding(12, 6) model_year = Embedding(3, 2) model_promotion = Embedding(2, 1) model_state = Embedding(12, 6) 将这些嵌入层连接起来 output_embeddings = [model_store, model_dow, model_day, model_month, model_year, model_promotion, model_state] output_model = Concatenate()(output_embeddings) ``` 运行报错: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs) 278 try: --> 279 K.is_keras_tensor(x) 280 except ValueError: D:\python\lib\site-packages\keras\backend\tensorflow_backend.py in is_keras_tensor(x) 473 raise ValueError('Unexpectedly found an instance of type `' + --> 474 str(type(x)) + '`. ' 475 'Expected a symbolic tensor instance.') ValueError: Unexpectedly found an instance of type `<class 'keras.layers.embeddings.Embedding'>`. Expected a symbolic tensor instance. During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-32-8e957c4150f0> in <module> ----> 1 output_model = Concatenate()(output_embeddings) D:\python\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs) 412 # Raise exceptions in case the input is not compatible 413 # with the input_spec specified in the layer constructor. --> 414 self.assert_input_compatibility(inputs) 415 416 # Collect input shapes to build layer. D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs) 283 'Received type: ' + 284 str(type(x)) + '. Full input: ' + --> 285 str(inputs) + '. All inputs to the layer ' 286 'should be tensors.') 287 ValueError: Layer concatenate_5 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x000001C82EA1EC88>, <keras.layers.embeddings.Embedding object at 0x000001C82EA1EB38>, <keras.layers.embeddings.Embedding object at 0x000001C82EA1EB00>, <keras.layers.embeddings.Embedding object at 0x000001C82E954240>, <keras.layers.embeddings.Embedding object at 0x000001C82E954198>, <keras.layers.embeddings.Embedding object at 0x000001C82E9542E8>, <keras.layers.embeddings.Embedding object at 0x000001C82E954160>]. All inputs to the layer should be tensors. 报错提示是:所有层的输入应该为张量,请问应该怎么修改呢?麻烦了!
用TensorFlow 训练mask rcnn时,总是在执行训练语句时报错,进行不下去了,求大神
用TensorFlow 训练mask rcnn时,总是在执行训练语句时报错,进行不下去了,求大神 执行语句是: ``` python model_main.py --model_dir=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training --pipeline_config_path=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training/mask_rcnn_inception_v2_coco.config ``` 报错信息如下: ``` WARNING:tensorflow:Forced number of epochs for all eval validations to be 1. WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1. WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x000001C1EA335C80>) includes params argument, but params are not passed to Estimator. WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards. Traceback (most recent call last): File "model_main.py", line 109, in <module> tf.app.run() File "E:\Python3.6\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run _sys.exit(main(argv)) File "model_main.py", line 105, in main tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0]) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate executor.run() File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run self.run_local() File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local hooks=train_hooks) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default input_fn, model_fn_lib.ModeKeys.TRAIN)) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn result = self._call_input_fn(input_fn, mode) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn return input_fn(**kwargs) File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 525, in _train_input_fn batch_size=params['batch_size'] if params else train_config.batch_size) File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 149, in build dataset = data_map_fn(process_fn, num_parallel_calls=num_parallel_calls) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 853, in map return ParallelMapDataset(self, map_func, num_parallel_calls) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1870, in __init__ super(ParallelMapDataset, self).__init__(input_dataset, map_func) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1839, in __init__ self._map_func.add_to_graph(ops.get_default_graph()) File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 484, in add_to_graph self._create_definition_if_needed() File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 319, in _create_definition_if_needed self._create_definition_if_needed_impl() File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 336, in _create_definition_if_needed_impl outputs = self._func(*inputs) File 
"E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1804, in tf_map_func ret = map_func(nested_args) File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 130, in process_fn processed_tensors = transform_input_data_fn(processed_tensors) File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 515, in transform_and_pad_input_data_fn tensor_dict=transform_data_fn(tensor_dict), File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 129, in transform_input_data tf.expand_dims(tf.to_float(image), axis=0)) File "D:\Tensorflow\tf\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 543, in preprocess parallel_iterations=self._parallel_iterations) File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in static_or_dynamic_map_fn outputs = [fn(arg) for arg in tf.unstack(elems)] File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in <listcomp> outputs = [fn(arg) for arg in tf.unstack(elems)] File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2264, in resize_to_range lambda: _resize_portrait_image(image)) File "E:\Python3.6\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func return func(*args, **kwargs) File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2063, in cond orig_res_t, res_t = context_t.BuildCondBranch(true_fn) File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1913, in BuildCondBranch original_result = fn() File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2263, in <lambda> lambda: _resize_landscape_image(image), File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2245, in _resize_landscape_image align_corners=align_corners, preserve_aspect_ratio=True) TypeError: resize_images() got an unexpected keyword argument 'preserve_aspect_ratio' ``` 根据提示的最后一句,是说没有一个有效参数 我用的是TensorFlow1.8 python3.6,下载的最新的TensorFlow-models-master
tensorflow的InvalidArgumentError报错问题(placeholder)
书上的一个实例,不知道为什么报错。请各位大神帮忙解答一下。 程序如下: from tensorflow.examples.tutorials.mnist import input_data mnist=input_data.read_data_sets("MNIST_data/",one_hot=True) print(mnist.train.images.shape,mnist.train.labels.shape) print(mnist.test.images.shape,mnist.test.labels.shape) print(mnist.validation.images.shape,mnist.validation.labels.shape) import tensorflow as tf sess=tf.InteractiveSession() x=tf.placeholder(tf.float32,[None,784]) W=tf.Variable(tf.zeros([784,10])) b=tf.Variable(tf.zeros([10])) y=tf.nn.softmax(tf.matmul(x,W)+b) y_=tf.placeholder(tf.float32,[None,10]) cross_entropy=tf.reduce_mean(-tf.reduce_sum(y_* tf.log(y),reduction_indices=[1])) train_step=tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) tf.global_variables_initializer().run() for i in range(1000): batch_xs,batch_ys=mnist.train.next_batch(100) train_step.run({x:batch_xs,y_:batch_ys}) correct_prediction=tf.equal(tf.argmax(y,1),tf.argmax(y_,1)) accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) print(accuracy.eval({x:mnist.test.images,y:mnist.test.labels})) 报错如下: runfile('D:/project/Spyder/MNIST_data.py', wdir='D:/project/Spyder') Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz (55000, 784) (55000, 10) (10000, 784) (10000, 10) (5000, 784) (5000, 10) D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py:1645: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s). warnings.warn('An interactive session is already active. This can ' Traceback (most recent call last): File "<ipython-input-2-3b25b2404fa0>", line 1, in <module> runfile('D:/project/Spyder/MNIST_data.py', wdir='D:/project/Spyder') File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "D:/project/Spyder/MNIST_data.py", line 29, in <module> print(accuracy.eval({x:mnist.test.images,y:mnist.test.labels})) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 680, in eval return _eval_using_default_session(self, feed_dict, self.graph, session) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 4951, in _eval_using_default_session return session.run(tensors, feed_dict) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 877, in run run_metadata_ptr) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run feed_dict_tensor, options, run_metadata) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run run_metadata) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call raise type(e)(node_def, op, message) InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_3' with dtype float and shape [?,10] [[Node: Placeholder_3 = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]] [[Node: Mean_3/_29 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", 
send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_18_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] Caused by op 'Placeholder_3', defined at: File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 268, in <module> main() File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 264, in main kernel.start() File "D:\soft\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 478, in start self.io_loop.start() File "D:\soft\Anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "D:\soft\Anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "D:\soft\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "D:\soft\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "D:\soft\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "D:\soft\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "D:\soft\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "D:\soft\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "D:\soft\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell handler(stream, idents, msg) File "D:\soft\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "D:\soft\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 208, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "D:\soft\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 537, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "D:\soft\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2728, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "D:\soft\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2856, in run_ast_nodes if self.run_code(code, result): File "D:\soft\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2910, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-3b25b2404fa0>", line 1, in <module> runfile('D:/project/Spyder/MNIST_data.py', wdir='D:/project/Spyder') File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace) File "D:\soft\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "D:/project/Spyder/MNIST_data.py", line 20, in <module> y_=tf.placeholder(tf.float32,[None,10]) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1735, in placeholder return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5928, in placeholder "Placeholder", dtype=dtype, shape=shape, name=name) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) 
File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func return func(*args, **kwargs) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3155, in create_op op_def=op_def) File "D:\soft\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1717, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_3' with dtype float and shape [?,10] [[Node: Placeholder_3 = Placeholder[dtype=DT_FLOAT, shape=[?,10], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]] [[Node: Mean_3/_29 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_18_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
求助:CUDA的RuntimeError:cuda runtime error (30)
``` from torchvision import models use_gpu = torch.cuda.is_available() model_ft = models.resnet18(pretrained = True) if use_gpu: model_ft = model_ft.cuda() ``` 运行后报错: ``` RuntimeError Traceback (most recent call last) <ipython-input-6-87942ec8aa13> in <module>() 11 model_ft.fc = nn.Linear(num_ftrs , 5)#############################2 12 if use_gpu: ---> 13 model_ft = model_ft.cuda() 14 #criterion = nn.MultiMarginLoss() 15 criterion = nn.CrossEntropyLoss() ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in cuda(self, device) 258 Module: self 259 """ --> 260 return self._apply(lambda t: t.cuda(device)) 261 262 def cpu(self): ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn) 185 def _apply(self, fn): 186 for module in self.children(): --> 187 module._apply(fn) 188 189 for param in self._parameters.values(): ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _apply(self, fn) 191 # Tensors stored in modules are graph leaves, and we don't 192 # want to create copy nodes, so we have to unpack the data. --> 193 param.data = fn(param.data) 194 if param._grad is not None: 195 param._grad.data = fn(param._grad.data) ~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in <lambda>(t) 258 Module: self 259 """ --> 260 return self._apply(lambda t: t.cuda(device)) 261 262 def cpu(self): ~\Anaconda3\lib\site-packages\torch\cuda\__init__.py in _lazy_init() 160 "Cannot re-initialize CUDA in forked subprocess. " + msg) 161 _check_driver() --> 162 torch._C._cuda_init() 163 _cudart = _load_cudart() 164 _cudart.cudaGetErrorName.restype = ctypes.c_char_p RuntimeError: cuda runtime error (30) : unknown error at ..\aten\src\THC\THCGeneral.cpp:87 ``` Win10+Anaconda5.0.1+python3.6.3 Cuda driver version 10.1 Cuda Toolkit 10.1 pytorch 1.0.1 显卡GeForce GTX 1050 Ti nvidia驱动版本430.39 求助啊!!!
tensorflow中我的网络输出层有4个节点,但最终结果只需要用到第4个节点的输出,现在有如下报错,如何解决?
输出层代码 Weights1 = tf.Variable(tf.random_normal([11, 4])) biases1 = tf.Variable(tf.zeros([1, 4]) + 0.1) Wx_plus_b1 = tf.matmul(l0, Weights1) + biases1 N1act = 2/(1+pow(math.e,-Wx_plus_b1[3]))-1 prediction = tf_spiky(N1act) 报错信息 raise ValueError(err.message) ValueError: slice index 3 of dimension 0 out of bounds. for 'strided_slice' (op: 'StridedSlice') with input shapes: [1,4], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
如何对使用ssd检测出来的目标进行计数
![图片说明](https://img-ask.csdn.net/upload/201902/19/1550586482_193932.png) ![图片说明](https://img-ask.csdn.net/upload/201902/19/1550586497_236815.png) 我使用了ssd对图像进行检测,检测结果如图所示,请问如何对每检测结果中的每一个对象计数。如果对视频进行物体检测的计数,需要往哪个方向进行。有好的博客可以推荐一下。谢谢。 源代码如下 ``` # coding: utf-8 # # Object Detection Demo # Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the [installation instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md) before you start. # # Imports # In[1]: import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') # ## Env setup # In[2]: # This is needed to display the images. get_ipython().magic('matplotlib inline') # ## Object detection imports # Here are the imports from the object detection module. # In[3]: from utils import label_map_util from utils import visualization_utils as vis_util # # Model preparation # ## Variables # # Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file. # # By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies. # In[4]: # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') # ## Download Model # In[5]: opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) # ## Load a (frozen) Tensorflow model into memory. # In[6]: detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') # ## Loading label map # Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. 
Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine # In[7]: category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) # ## Helper code # In[8]: def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # # Detection # In[9]: # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) # In[ ]: def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict # In[ ]: for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. 
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) ```
吴恩达深度学习第四课第四周fr_utils.py报错,有人遇到过吗
Face Recognition/fr_utils.py: why are `_get_session()` (line 21) and `model` (line 140) unresolved references?

In Face Recognition/fr_utils.py, `_get_session()` on line 21 and `model` on line 140 cannot be resolved. What is causing this? When loading the model, the following error is raised:

```
Using TensorFlow backend.
2018-08-26 21:30:53.046324: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Total Params: 3743280
Traceback (most recent call last):
  File "C:/Users/51530/PycharmProjects/DL/wuenda/Face/faceV3.py", line 60, in <module>
    load_weights_from_FaceNet(FRmodel)
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 133, in load_weights_from_FaceNet
    weights_dict = load_weights()
  File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 154, in load_weights
    conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
  File "E:\anaconda\lib\site-packages\numpy\lib\npyio.py", line 1867, in genfromtxt
    raise ValueError(errmsg)
ValueError: Some errors were detected !
    Line #7 (got 2 columns instead of 1)
    Line #12 (got 3 columns instead of 1)
    Line #15 (got 2 columns instead of 1)
```

The file in question:

```python
#### PART OF THIS CODE IS USING CODE FROM VICTOR SY WANG: https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/utils.py ####

import tensorflow as tf
import numpy as np
import os
import cv2
from numpy import genfromtxt
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
import h5py
import matplotlib.pyplot as plt

_FLOATX = 'float32'

def variable(value, dtype=_FLOATX, name=None):
    v = tf.Variable(np.asarray(value, dtype=dtype), name=name)
    _get_session().run(v.initializer)
    return v

def shape(x):
    return x.get_shape()

def square(x):
    return tf.square(x)

def zeros(shape, dtype=_FLOATX, name=None):
    return variable(np.zeros(shape), dtype, name)

def concatenate(tensors, axis=-1):
    if axis < 0:
        axis = axis % len(tensors[0].get_shape())
    return tf.concat(axis, tensors)

def LRN2D(x):
    return tf.nn.lrn(x, alpha=1e-4, beta=0.75)

def conv2d_bn(x,
              layer=None,
              cv1_out=None,
              cv1_filter=(1, 1),
              cv1_strides=(1, 1),
              cv2_out=None,
              cv2_filter=(3, 3),
              cv2_strides=(1, 1),
              padding=None):
    num = '' if cv2_out == None else '1'
    tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides, data_format='channels_first', name=layer+'_conv'+num)(x)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+num)(tensor)
    tensor = Activation('relu')(tensor)
    if padding == None:
        return tensor
    tensor = ZeroPadding2D(padding=padding, data_format='channels_first')(tensor)
    if cv2_out == None:
        return tensor
    tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides, data_format='channels_first', name=layer+'_conv'+'2')(tensor)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+'2')(tensor)
    tensor = Activation('relu')(tensor)
    return tensor

WEIGHTS = [
    'conv1', 'bn1', 'conv2', 'bn2', 'conv3', 'bn3',
    'inception_3a_1x1_conv', 'inception_3a_1x1_bn',
    'inception_3a_pool_conv', 'inception_3a_pool_bn',
    'inception_3a_5x5_conv1', 'inception_3a_5x5_conv2', 'inception_3a_5x5_bn1', 'inception_3a_5x5_bn2',
    'inception_3a_3x3_conv1', 'inception_3a_3x3_conv2', 'inception_3a_3x3_bn1', 'inception_3a_3x3_bn2',
    'inception_3b_3x3_conv1', 'inception_3b_3x3_conv2', 'inception_3b_3x3_bn1', 'inception_3b_3x3_bn2',
    'inception_3b_5x5_conv1', 'inception_3b_5x5_conv2', 'inception_3b_5x5_bn1', 'inception_3b_5x5_bn2',
    'inception_3b_pool_conv', 'inception_3b_pool_bn', 'inception_3b_1x1_conv', 'inception_3b_1x1_bn',
    'inception_3c_3x3_conv1', 'inception_3c_3x3_conv2', 'inception_3c_3x3_bn1', 'inception_3c_3x3_bn2',
    'inception_3c_5x5_conv1', 'inception_3c_5x5_conv2', 'inception_3c_5x5_bn1', 'inception_3c_5x5_bn2',
    'inception_4a_3x3_conv1', 'inception_4a_3x3_conv2', 'inception_4a_3x3_bn1', 'inception_4a_3x3_bn2',
    'inception_4a_5x5_conv1', 'inception_4a_5x5_conv2', 'inception_4a_5x5_bn1', 'inception_4a_5x5_bn2',
    'inception_4a_pool_conv', 'inception_4a_pool_bn', 'inception_4a_1x1_conv', 'inception_4a_1x1_bn',
    'inception_4e_3x3_conv1', 'inception_4e_3x3_conv2', 'inception_4e_3x3_bn1', 'inception_4e_3x3_bn2',
    'inception_4e_5x5_conv1', 'inception_4e_5x5_conv2', 'inception_4e_5x5_bn1', 'inception_4e_5x5_bn2',
    'inception_5a_3x3_conv1', 'inception_5a_3x3_conv2', 'inception_5a_3x3_bn1', 'inception_5a_3x3_bn2',
    'inception_5a_pool_conv', 'inception_5a_pool_bn', 'inception_5a_1x1_conv', 'inception_5a_1x1_bn',
    'inception_5b_3x3_conv1', 'inception_5b_3x3_conv2', 'inception_5b_3x3_bn1', 'inception_5b_3x3_bn2',
    'inception_5b_pool_conv', 'inception_5b_pool_bn', 'inception_5b_1x1_conv', 'inception_5b_1x1_bn',
    'dense_layer'
]

conv_shape = {
    'conv1': [64, 3, 7, 7], 'conv2': [64, 64, 1, 1], 'conv3': [192, 64, 3, 3],
    'inception_3a_1x1_conv': [64, 192, 1, 1], 'inception_3a_pool_conv': [32, 192, 1, 1],
    'inception_3a_5x5_conv1': [16, 192, 1, 1], 'inception_3a_5x5_conv2': [32, 16, 5, 5],
    'inception_3a_3x3_conv1': [96, 192, 1, 1], 'inception_3a_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_3x3_conv1': [96, 256, 1, 1], 'inception_3b_3x3_conv2': [128, 96, 3, 3],
    'inception_3b_5x5_conv1': [32, 256, 1, 1], 'inception_3b_5x5_conv2': [64, 32, 5, 5],
    'inception_3b_pool_conv': [64, 256, 1, 1], 'inception_3b_1x1_conv': [64, 256, 1, 1],
    'inception_3c_3x3_conv1': [128, 320, 1, 1], 'inception_3c_3x3_conv2': [256, 128, 3, 3],
    'inception_3c_5x5_conv1': [32, 320, 1, 1], 'inception_3c_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_3x3_conv1': [96, 640, 1, 1], 'inception_4a_3x3_conv2': [192, 96, 3, 3],
    'inception_4a_5x5_conv1': [32, 640, 1, 1], 'inception_4a_5x5_conv2': [64, 32, 5, 5],
    'inception_4a_pool_conv': [128, 640, 1, 1], 'inception_4a_1x1_conv': [256, 640, 1, 1],
    'inception_4e_3x3_conv1': [160, 640, 1, 1], 'inception_4e_3x3_conv2': [256, 160, 3, 3],
    'inception_4e_5x5_conv1': [64, 640, 1, 1], 'inception_4e_5x5_conv2': [128, 64, 5, 5],
    'inception_5a_3x3_conv1': [96, 1024, 1, 1], 'inception_5a_3x3_conv2': [384, 96, 3, 3],
    'inception_5a_pool_conv': [96, 1024, 1, 1], 'inception_5a_1x1_conv': [256, 1024, 1, 1],
    'inception_5b_3x3_conv1': [96, 736, 1, 1], 'inception_5b_3x3_conv2': [384, 96, 3, 3],
    'inception_5b_pool_conv': [96, 736, 1, 1], 'inception_5b_1x1_conv': [256, 736, 1, 1],
}

def load_weights_from_FaceNet(FRmodel):
    # Load weights from csv files (which was exported from Openface torch model)
    weights = WEIGHTS
    weights_dict = load_weights()

    # Set layer weights of the model
    for name in weights:
        if FRmodel.get_layer(name) != None:
            FRmodel.get_layer(name).set_weights(weights_dict[name])
        elif model.get_layer(name) != None:
            model.get_layer(name).set_weights(weights_dict[name])

def load_weights():
    # Set weights path
    dirPath = './weights'
    fileNames = filter(lambda f: not f.startswith('.'), os.listdir(dirPath))
    paths = {}
    weights_dict = {}

    for n in fileNames:
        paths[n.replace('.csv', '')] = dirPath + '/' + n

    for name in WEIGHTS:
        if 'conv' in name:
            conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            conv_w = np.reshape(conv_w, conv_shape[name])
            conv_w = np.transpose(conv_w, (2, 3, 1, 0))
            conv_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            weights_dict[name] = [conv_w, conv_b]
        elif 'bn' in name:
            bn_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            bn_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            bn_m = genfromtxt(paths[name + '_m'], delimiter=',', dtype=None)
            bn_v = genfromtxt(paths[name + '_v'], delimiter=',', dtype=None)
            weights_dict[name] = [bn_w, bn_b, bn_m, bn_v]
        elif 'dense' in name:
            dense_w = genfromtxt(dirPath+'/dense_w.csv', delimiter=',', dtype=None)
            dense_w = np.reshape(dense_w, (128, 736))
            dense_w = np.transpose(dense_w, (1, 0))
            dense_b = genfromtxt(dirPath+'/dense_b.csv', delimiter=',', dtype=None)
            weights_dict[name] = [dense_w, dense_b]

    return weights_dict

def load_dataset():
    train_dataset = h5py.File('datasets/train_happy.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_happy.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

def img_to_encoding(image_path, model):
    img1 = cv2.imread(image_path, 1)
    img = img1[..., ::-1]
    img = np.around(np.transpose(img, (2, 0, 1)) / 255.0, decimals=12)
    x_train = np.array([img])
    embedding = model.predict_on_batch(x_train)
    return embedding
```
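These look like two separate problems. `_get_session()` is an internal helper from Keras's TensorFlow backend that this file calls but never defines or imports, and `model` on line 140 is an undefined global (the `elif` branch is dead code, since only `FRmodel` is ever passed in); that is why the IDE flags both as unresolved references. The `genfromtxt` ValueError is unrelated to those warnings: it says rows in a weights CSV have inconsistent column counts, which usually means the files in `./weights` are corrupted or were not downloaded completely, so re-exporting or re-downloading them is worth trying first. A minimal, untested sketch of how the two unresolved references could be repaired, assuming Keras 2.x on the TensorFlow backend:

```python
import keras.backend as K

def variable(value, dtype=_FLOATX, name=None):
    # Replace the undefined _get_session() with the public backend API,
    # which returns the TF session Keras is currently using.
    v = tf.Variable(np.asarray(value, dtype=dtype), name=name)
    K.get_session().run(v.initializer)
    return v

def load_weights_from_FaceNet(FRmodel):
    # Only FRmodel is ever passed in, so the dead `elif model...` branch
    # can simply be dropped instead of referencing an undefined global.
    weights_dict = load_weights()
    for name in WEIGHTS:
        if FRmodel.get_layer(name) is not None:
            FRmodel.get_layer(name).set_weights(weights_dict[name])
```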
How can I read a frame with OpenCV in PyQt5 and feed it into the network?
I want to build a PyQt5 UI that wraps the Google object detection API sample code. The original sample detects objects in a single image; I want to grab one frame from the camera, run detection on it, and display the result. The whole program is as follows:

```python
# coding:utf-8
'''
V3.0A: attempt camera-based recognition
'''
import numpy as np
import cv2
import os
import os.path
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import pylab

from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
from PyQt5.QtGui import *


class UiForm():

    openfile_name_pb = ''
    openfile_name_pbtxt = ''
    openpic_name = ''
    num_class = 0

    def setupUi(self, Form):
        Form.setObjectName("Form")
        Form.resize(600, 690)
        Form.setMinimumSize(QtCore.QSize(600, 690))
        Form.setMaximumSize(QtCore.QSize(600, 690))
        self.frame = QtWidgets.QFrame(Form)
        self.frame.setGeometry(QtCore.QRect(20, 20, 550, 100))
        self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel)
        self.frame.setFrameShadow(QtWidgets.QFrame.Raised)
        self.frame.setObjectName("frame")
        self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.frame)
        self.horizontalLayout_2.setObjectName("horizontalLayout_2")
        # Button: load the model (.pb) file
        self.btn_add_file = QtWidgets.QPushButton(self.frame)
        self.btn_add_file.setObjectName("btn_add_file")
        self.horizontalLayout_2.addWidget(self.btn_add_file)
        # Button: load the .pbtxt label file
        self.btn_add_pbtxt = QtWidgets.QPushButton(self.frame)
        self.btn_add_pbtxt.setObjectName("btn_add_pbtxt")
        self.horizontalLayout_2.addWidget(self.btn_add_pbtxt)
        # Button: enter the number of classes to detect
        self.btn_enter = QtWidgets.QPushButton(self.frame)
        self.btn_enter.setObjectName("btn_enter")
        self.horizontalLayout_2.addWidget(self.btn_enter)
        # Button: open the camera
        self.btn_opencam = QtWidgets.QPushButton(self.frame)
        self.btn_opencam.setObjectName("btn_objdec")
        self.horizontalLayout_2.addWidget(self.btn_opencam)
        # Button: start detection
        self.btn_objdec = QtWidgets.QPushButton(self.frame)
        self.btn_objdec.setObjectName("btn_objdec")
        self.horizontalLayout_2.addWidget(self.btn_objdec)
        # Button: exit
        self.btn_exit = QtWidgets.QPushButton(self.frame)
        self.btn_exit.setObjectName("btn_exit")
        self.horizontalLayout_2.addWidget(self.btn_exit)
        # Label: display the processed frame
        self.lab_rawimg_show = QtWidgets.QLabel(Form)
        self.lab_rawimg_show.setGeometry(QtCore.QRect(50, 140, 500, 500))
        self.lab_rawimg_show.setMinimumSize(QtCore.QSize(500, 500))
        self.lab_rawimg_show.setMaximumSize(QtCore.QSize(500, 500))
        self.lab_rawimg_show.setObjectName("lab_rawimg_show")
        self.lab_rawimg_show.setStyleSheet(("border:2px solid red"))
        self.retranslateUi(Form)
        # Wire the buttons to their actions via the clicked signal (openfile slot?)
        self.btn_add_file.clicked.connect(self.openpb)
        # Open the pbtxt file
        self.btn_add_pbtxt.clicked.connect(self.openpbtxt)
        # Let the user enter the number of classes
        self.btn_enter.clicked.connect(self.enter_num_cls)
        # Open the camera
        self.btn_opencam.clicked.connect(self.opencam)
        # Start detection
        # ~ self.btn_objdec.clicked.connect(self.object_detection)
        # Connect btn_exit to the Form window: clicking sends the close command
        self.btn_exit.clicked.connect(Form.close)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        _translate = QtCore.QCoreApplication.translate
        Form.setWindowTitle(_translate("Form", "目标检测"))
        self.btn_add_file.setText(_translate("Form", "加载模型文件"))
        self.btn_add_pbtxt.setText(_translate("Form", "加载pbtxt文件"))
        self.btn_enter.setText(_translate("From", "指定识别类别数"))
        self.btn_opencam.setText(_translate("Form", "打开摄像头"))
        self.btn_objdec.setText(_translate("From", "开始识别"))
        self.btn_exit.setText(_translate("Form", "退出"))
        self.lab_rawimg_show.setText(_translate("Form", "识别效果"))

    def openpb(self):
        global openfile_name_pb
        openfile_name_pb, _ = QFileDialog.getOpenFileName(self.btn_add_file, '选择pb文件', '/home/kanghao/', 'pb_files(*.pb)')
        print('加载模型文件地址为:' + str(openfile_name_pb))

    def openpbtxt(self):
        global openfile_name_pbtxt
        openfile_name_pbtxt, _ = QFileDialog.getOpenFileName(self.btn_add_pbtxt, '选择pbtxt文件', '/home/kanghao/', 'pbtxt_files(*.pbtxt)')
        print('加载标签文件地址为:' + str(openfile_name_pbtxt))

    def opencam(self):
        self.camcapture = cv2.VideoCapture(0)
        self.timer = QtCore.QTimer()
        self.timer.start()
        self.timer.setInterval(100)  # refresh every 0.1 s
        self.timer.timeout.connect(self.camshow)

    def camshow(self):
        global camimg
        _, camimg = self.camcapture.read()
        print(_)
        camimg = cv2.resize(camimg, (512, 512))
        camimg = cv2.cvtColor(camimg, cv2.COLOR_BGR2RGB)
        print(type(camimg))
        #strcamimg = camimg.tostring()
        showImage = QtGui.QImage(camimg.data, camimg.shape[1], camimg.shape[0], QtGui.QImage.Format_RGB888)
        self.lab_rawimg_show.setPixmap(QtGui.QPixmap.fromImage(showImage))

    def enter_num_cls(self):
        global num_class
        num_class, okPressed = QInputDialog.getInt(self.btn_enter, '指定训练类别数', '你的目标有多少类?', 1, 1, 28, 1)
        if okPressed:
            print('识别目标总类为:' + str(num_class))

    def img2pixmap(self, image):
        Y, X = image.shape[:2]
        self._bgra = np.zeros((Y, X, 4), dtype=np.uint8, order='C')
        self._bgra[..., 0] = image[..., 2]
        self._bgra[..., 1] = image[..., 1]
        self._bgra[..., 2] = image[..., 0]
        qimage = QtGui.QImage(self._bgra.data, X, Y, QtGui.QImage.Format_RGB32)
        pixmap = QtGui.QPixmap.fromImage(qimage)
        return pixmap

    def object_detection(self):
        sys.path.append("..")
        from object_detection.utils import ops as utils_ops
        if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
            raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')
        from utils import label_map_util
        from utils import visualization_utils as vis_util
        # Path to frozen detection graph. This is the actual model that is used for the object detection.
        PATH_TO_FROZEN_GRAPH = openfile_name_pb
        # List of the strings that is used to add correct label for each box.
        PATH_TO_LABELS = openfile_name_pbtxt
        NUM_CLASSES = num_class

        detection_graph = tf.Graph()
        with detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)

        def load_image_into_numpy_array(image):
            (im_width, im_height) = image.size
            return np.array(image.getdata()).reshape(
                (im_height, im_width, 3)).astype(np.uint8)

        # For the sake of simplicity we will use only 2 images:
        # image1.jpg
        # image2.jpg
        # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
        TEST_IMAGE_PATHS = camimg
        print(TEST_IMAGE_PATHS)
        # Size, in inches, of the output images.
        IMAGE_SIZE = (12, 8)

        def run_inference_for_single_image(image, graph):
            with graph.as_default():
                with tf.Session() as sess:
                    # Get handles to input and output tensors
                    ops = tf.get_default_graph().get_operations()
                    all_tensor_names = {output.name for op in ops for output in op.outputs}
                    tensor_dict = {}
                    for key in [
                        'num_detections', 'detection_boxes', 'detection_scores',
                        'detection_classes', 'detection_masks'
                    ]:
                        tensor_name = key + ':0'
                        if tensor_name in all_tensor_names:
                            tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
                    if 'detection_masks' in tensor_dict:
                        # The following processing is only for single image
                        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                            detection_masks, detection_boxes, image.shape[0], image.shape[1])
                        detection_masks_reframed = tf.cast(
                            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                        # Follow the convention by adding back the batch dimension
                        tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
                    image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
                    # Run inference
                    output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)})
                    # all outputs are float32 numpy arrays, so convert types as appropriate
                    output_dict['num_detections'] = int(output_dict['num_detections'][0])
                    output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
                    output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
                    output_dict['detection_scores'] = output_dict['detection_scores'][0]
                    if 'detection_masks' in output_dict:
                        output_dict['detection_masks'] = output_dict['detection_masks'][0]
            return output_dict

        #image = Image.open(TEST_IMAGE_PATHS)
        # the array based representation of the image will be used later in order to prepare the
        # result image with boxes and labels on it.
        image_np = load_image_into_numpy_array(TEST_IMAGE_PATHS)
        # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
        image_np_expanded = np.expand_dims(image_np, axis=0)
        # Actual detection.
        output_dict = run_inference_for_single_image(image_np, detection_graph)
        # Visualization of the results of a detection.
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            output_dict['detection_boxes'],
            output_dict['detection_classes'],
            output_dict['detection_scores'],
            category_index,
            instance_masks=output_dict.get('detection_masks'),
            use_normalized_coordinates=True,
            line_thickness=8)
        plt.figure(figsize=IMAGE_SIZE)
        plt.imshow(image_np)
        #plt.savefig(str(TEST_IMAGE_PATHS)+".jpg")


## Entry point: show the UI
if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    Window = QtWidgets.QWidget()
    # ui is an instance of the UiForm class
    ui = UiForm()
    ui.setupUi(Window)
    Window.show()
    sys.exit(app.exec_())
```

But when it runs, I get this error:

![图片说明](https://img-ask.csdn.net/upload/201811/30/1543567054_511116.png)

Any help would be appreciated.
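The screenshot is not legible here, so this is an informed guess, but one likely failure point in the code above: `object_detection` assigns the OpenCV frame to `TEST_IMAGE_PATHS` and then passes it to `load_image_into_numpy_array`, which expects a PIL `Image` (it reads `image.size` and calls `image.getdata()`), while `camshow` stores `camimg` as a NumPy array, for which `.size` is a scalar and the tuple unpacking fails. Since the camera frame is already an RGB `uint8` array of shape (512, 512, 3), it can be fed to the detector directly. A hedged sketch of the relevant lines inside `object_detection`, reusing `camimg` and `run_inference_for_single_image` from the code above:

```python
# The frame produced by camshow() is already an HxWx3 RGB uint8 array,
# which matches the graph's 'image_tensor:0' input (the helper adds the
# batch dimension itself), so the PIL-only conversion can be skipped.
image_np = np.asarray(camimg, dtype=np.uint8)   # instead of load_image_into_numpy_array(...)
output_dict = run_inference_for_single_image(image_np, detection_graph)
```

Re-enabling the commented-out `self.btn_objdec.clicked.connect(self.object_detection)` line, and guarding against the camera not having produced a frame yet before detection starts, would be the remaining wiring.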