Problem running model_main.py from TensorFlow's Object Detection API

Here is the command and the full traceback:

```
D:\tensorflow\models\research\object_detection>python model_main.py --pipeline_config_path=E:\python_demo\pedestrian_demo\pedestrian_train\models\pipeline.config --model_dir=E:\python_demo\pedestrian_demo\pedestrian_train\models\train --num_train_steps=5000 --sample_1_of_n_eval_examples=1 --alsologstderr
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "C:\anaconda\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "model_main.py", line 71, in main
    FLAGS.sample_1_of_n_eval_on_train_examples))
  File "D:\ssd-detection\models-master\research\object_detection\model_lib.py", line 589, in create_estimator_and_inputs
    pipeline_config_path, config_override=config_override)
  File "D:\ssd-detection\models-master\research\object_detection\utils\config_util.py", line 98, in get_configs_from_pipeline_file
    text_format.Merge(proto_str, pipeline_config)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 574, in Merge
    descriptor_pool=descriptor_pool)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 631, in MergeLines
    return parser.MergeLines(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 654, in MergeLines
    self._ParseOrMerge(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 676, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 768, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 35:7 : Message type "object_detection.protos.SsdFeatureExtractor" has no field named "batch_norm_trainable".
```

How can I solve this error? Any guidance is appreciated.

1 answer

Look at the definition of object_detection.protos.SsdFeatureExtractor; the field named in the error does not exist there.
If your earlier question was solved, please accept the answer. I see your acceptance rate is 0, which discourages detailed answers to your questions.
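For context (an assumption based on the error, not confirmed by the asker's setup): newer revisions of the Object Detection API removed the `batch_norm_trainable` option from the `SsdFeatureExtractor` proto, so a pipeline.config written for an older release fails to parse against the new protos. The usual fix is simply to delete the offending line from the config (line 35, column 7, according to the error message):

```
feature_extractor {
  type: "ssd_mobilenet_v1"        # example value; keep whatever your config has
  # batch_norm_trainable: true   <- delete this line; the field no longer exists
}
```

Alternatively, check out a models repository revision that matches the release the config was written for.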

Other related posts
model_builder_test.py errors out after installing the TensorFlow Object Detection API?

```
Traceback (most recent call last):
  File "G:\python\models\research\object_detection\builders\model_builder_test.py", line 23, in <module>
    from object_detection.builders import model_builder
  File "G:\python\models\research\object_detection\builders\model_builder.py", line 20, in <module>
    from object_detection.builders import anchor_generator_builder
  File "G:\python\models\research\object_detection\builders\anchor_generator_builder.py", line 22, in <module>
    from object_detection.protos import anchor_generator_pb2
  File "G:\python\models\research\object_detection\protos\anchor_generator_pb2.py", line 29, in <module>
    dependencies=[object__detection_dot_protos_dot_flexible__grid__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_grid__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_multiscale__anchor__generator__pb2.DESCRIPTOR,object__detection_dot_protos_dot_ssd__anchor__generator__pb2.DESCRIPTOR,])
  File "G:\python\python setup\lib\site-packages\google\protobuf\descriptor.py", line 879, in __new__
    return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't build proto file into descriptor pool!
Invalid proto descriptor for file "object_detection/protos/anchor_generator.proto":
  object_detection/protos/flexible_grid_anchor_generator.proto: Import "object_detection/protos/flexible_grid_anchor_generator.proto" has not been loaded.
  object_detection/protos/multiscale_anchor_generator.proto: Import "object_detection/protos/multiscale_anchor_generator.proto" has not been loaded.
  object_detection.protos.AnchorGenerator.multiscale_anchor_generator: "object_detection.protos.MultiscaleAnchorGenerator" seems to be defined in "protos/multiscale_anchor_generator.proto", which is not imported by "object_detection/protos/anchor_generator.proto". To use it here, please add the necessary import.
  object_detection.protos.AnchorGenerator.flexible_grid_anchor_generator: "object_detection.protos.FlexibleGridAnchorGenerator" seems to be defined in "protos/flexible_grid_anchor_generator.proto", which is not imported by "object_detection/protos/anchor_generator.proto". To use it here, please add the necessary import.
```

I've tried every fix I could find online with no luck; a few looked promising but weren't detailed enough to follow.
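A likely cause, assuming a standard models/research checkout: the generated `*_pb2.py` files came from an older protoc run that predates some of the .proto files now being imported (here `flexible_grid_anchor_generator.proto`). Regenerating all of them in one pass usually resolves it:

```shell
# Run from the models/research directory so the proto import paths resolve.
# Requires a protoc binary on PATH (the standard setup step for this repo).
protoc object_detection/protos/*.proto --python_out=.
```

On plain Windows cmd the `*.proto` wildcard may not expand; run it from PowerShell or list the files explicitly in that case.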

ValueError: No data files found in satellite/data\satellite_train_*.tfrecord

While following the book's "build your own image recognition model" project I hit this error. It reports that no data files can be found, but both the file path and the data look fine.

```
D:\Anaconda\anaconda\envs\tensorflow\python.exe D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py --train_dir=satellite/train_dir --dataset_name=satellite --dataset_split_name=train --dataset_dir=satellite/data --model_name=inception_v3 --checkpoint_path=satellite/pretrained/inception_v3.ckpt --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits --max_number_of_steps=100000 --batch_size=32 --learning_rate=0.001 --learning_rate_decay_type=fixed --save_interval_secs=300 --save_summaries_secs=2 --log_every_n_steps=10 --optimizer=rmsprop --weight_decay=0.00004
WARNING:tensorflow:From D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py:397: create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.create_global_step
Traceback (most recent call last):
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 572, in <module>
    tf.app.run()
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "D:/PyCharm/PycharmProjects/chapter_3/slim/train_image_classifier.py", line 430, in main
    common_queue_min=10 * FLAGS.batch_size)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\dataset_data_provider.py", line 94, in __init__
    scope=scope)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 238, in parallel_read
    data_files = get_data_files(data_sources)
  File "D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\slim\python\slim\data\parallel_reader.py", line 311, in get_data_files
    raise ValueError('No data files found in %s' % (data_sources,))
ValueError: No data files found in satellite/data\satellite_train_*.tfrecord
```
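Note the mixed separators in the failing pattern: `satellite/data\satellite_train_*.tfrecord` (a forward slash joined to a backslash). On Windows, file-glob matching can fail on such mixed patterns even when the files exist. A quick way to check, using only the standard library and a hypothetical throwaway layout, is to compare a naively concatenated pattern with one built by `os.path.join`:

```python
import glob
import os
import tempfile

# Build a throwaway directory that mimics the dataset layout.
root = tempfile.mkdtemp()
data_dir = os.path.join(root, "data")
os.makedirs(data_dir)
open(os.path.join(data_dir, "satellite_train_00000.tfrecord"), "w").close()

# Mixed separators, as produced by naive string concatenation:
# on POSIX this matches nothing (the backslash is a literal character),
# and Windows APIs are inconsistent about it too.
mixed = root + "/data\\satellite_train_*.tfrecord"

# A pattern built with os.path.join uses the native separator throughout.
normalized = os.path.join(root, "data", "satellite_train_*.tfrecord")

print(len(glob.glob(normalized)))  # prints 1
```

If the normalized pattern matches but the original does not, fix the dataset path construction (or the `FILE_PATTERN` in the dataset definition) to use `os.path.join` consistently.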

Training Mask R-CNN with TensorFlow keeps failing at the training command and I can't get past it. Please help.

When training Mask R-CNN with TensorFlow, the training command always errors out and I can't get any further. The command is:

```
python model_main.py --model_dir=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training --pipeline_config_path=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training/mask_rcnn_inception_v2_coco.config
```

The error output:

```
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x000001C1EA335C80>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "model_main.py", line 105, in main
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate
    executor.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run
    self.run_local()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local
    hooks=train_hooks)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn
    result = self._call_input_fn(input_fn, mode)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn
    return input_fn(**kwargs)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 525, in _train_input_fn
    batch_size=params['batch_size'] if params else train_config.batch_size)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 149, in build
    dataset = data_map_fn(process_fn, num_parallel_calls=num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 853, in map
    return ParallelMapDataset(self, map_func, num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1870, in __init__
    super(ParallelMapDataset, self).__init__(input_dataset, map_func)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1839, in __init__
    self._map_func.add_to_graph(ops.get_default_graph())
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 484, in add_to_graph
    self._create_definition_if_needed()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 319, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 336, in _create_definition_if_needed_impl
    outputs = self._func(*inputs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1804, in tf_map_func
    ret = map_func(nested_args)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 130, in process_fn
    processed_tensors = transform_input_data_fn(processed_tensors)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 515, in transform_and_pad_input_data_fn
    tensor_dict=transform_data_fn(tensor_dict),
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 129, in transform_input_data
    tf.expand_dims(tf.to_float(image), axis=0))
  File "D:\Tensorflow\tf\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 543, in preprocess
    parallel_iterations=self._parallel_iterations)
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in static_or_dynamic_map_fn
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in <listcomp>
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2264, in resize_to_range
    lambda: _resize_portrait_image(image))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2063, in cond
    orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1913, in BuildCondBranch
    original_result = fn()
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2263, in <lambda>
    lambda: _resize_landscape_image(image),
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2245, in _resize_landscape_image
    align_corners=align_corners, preserve_aspect_ratio=True)
TypeError: resize_images() got an unexpected keyword argument 'preserve_aspect_ratio'
```

Going by the last line, it's complaining about an invalid keyword argument. I'm on TensorFlow 1.8 with Python 3.6, using the latest TensorFlow models-master download.
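That last line is a version mismatch: the latest models code passes `preserve_aspect_ratio` to `tf.image.resize_images`, but that keyword does not exist yet in TF 1.8 (it was added in a later 1.x release). The fix is to upgrade TensorFlow, or to check out a models revision contemporary with TF 1.8. The general pattern for diagnosing such mismatches, shown here with a stand-in function whose signature is hypothetical, is to check the callee's signature before passing a new keyword:

```python
import inspect

def supports_kwarg(fn, name):
    """Return True if fn accepts a keyword argument with the given name."""
    params = inspect.signature(fn).parameters
    return name in params or any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Stand-in for how resize_images looked before the new keyword was added
# (illustrative signature, not the real TF one):
def resize_images(images, size, method=0, align_corners=False):
    return images

print(supports_kwarg(resize_images, "preserve_aspect_ratio"))  # False
print(supports_kwarg(resize_images, "align_corners"))          # True
```

If the check returns False in your environment, the installed TensorFlow predates the models code you checked out.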

Tensorflow Object Detection API errors with "Windows fatal exception: access violation" when training my own data

Python 3.6, TF 1.14.0. The Object Detection API runs fine on the demo images and when adapted to a webcam, but training on my own data fails with "Windows fatal exception: access violation". I'm using the ssd_mobilenet_v1_coco_2018_01_28 model. The command:

```
python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
```

This is really just the standard training walkthrough from online tutorials, and it always fails like this. The full output:

```
(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
WARNING: Logging before flag parsing goes to stderr.
W0622 16:50:30.230578 14180 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0622 16:50:30.317274 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
W0622 16:50:30.355400 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.
W0622 16:50:30.388313 14180 deprecation_wrapper.py:119] From model_main.py:109: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0622 16:50:30.397290 14180 deprecation_wrapper.py:119] From D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py:98: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
Windows fatal exception: access violation

Current thread 0x00003764 (most recent call first):
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84 in _preread_check
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 122 in read
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py", line 99 in get_configs_from_pipeline_file
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\model_lib.py", line 606 in create_estimator_and_inputs
  File "model_main.py", line 71 in main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 251 in _run_main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 300 in run
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
  File "model_main.py", line 109 in <module>

(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>
```

Please advise.
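One thing worth ruling out first (an observation about the command, not a confirmed diagnosis): the crash happens inside `_preread_check` while opening the pipeline config, and the path passed is `/pre_model/pipeline.config`. The leading slash makes it an absolute (drive-root) path, whereas the tutorial layout suggests a path relative to the working directory. A minimal preflight check before launching training:

```python
import os

# Hypothetical: the path exactly as passed on the command line above.
config_path = "/pre_model/pipeline.config"

# A leading slash makes this absolute (drive-root on Windows), which is
# rarely what a tutorial layout intends; relative would be "pre_model/...".
print(os.path.isabs(config_path))   # True
print(os.path.isfile(config_path))  # False unless the file really sits at the drive root
```

If `isfile` is False, drop the leading slash (or use the full `D:\...` path) and retry before digging into the access violation itself.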

Training a TensorFlow (object_detection) model exits after the first evaluation; how do I keep it training?

When I train the SSD model, training runs for about 10 minutes, then the evaluation phase starts, and after evaluation the program exits on its own, with no errors or warnings. Why is that, and how do I make it keep training? Training command:

```
python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
```

Config (pasted with the issue-template details):

```
training exit after the first evaluation (only one evaluation) in Tensorflow model (object_detection) without error and warning
System information
- What is the top-level directory of the model you are using: models/research/object_detection/
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): NO
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows-10 (64bit)
- TensorFlow installed from (source or binary): conda install tensorflow-gpu
- TensorFlow version (use command below): 1.13.1
- Bazel version (if compiling from source): N/A
- CUDA/cuDNN version: cudnn-7.6.0
- GPU model and memory: GeForce GTX 1060 6GB
- Exact command to reproduce: see the training command above

This is my config:
train_config {
  batch_size: 24
  data_augmentation_options { random_horizontal_flip { } }
  data_augmentation_options { ssd_random_crop { } }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}
train_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord"
  }
}
eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord"
  }
}
```

Console output:

```
(default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn..model_fn at 0x0000027CBAB7BB70>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating: Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.data.experimental.parallel_interleave(...).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating: seed2 arg is deprecated. Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.data.Dataset.batch(..., drop_remainder=True).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.cast instead.
2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:04:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating: Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating: tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating: Use standard file APIs to check for files with this prefix.
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.43s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.287
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.529
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.278
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.162
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.356
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.356
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384
(default) D:\gitcode\models\research>
```
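One plausible explanation, assuming nothing else is wrong: `--model_dir` points at the downloaded pretrained model's own directory (`.../ssd_mobilenet_v1_coco_2018_01_28/saved_model`). The Estimator resumes from whatever checkpoint it finds in `model_dir`, and if that checkpoint's global step is already at or beyond `--num_train_steps`, training completes immediately, one evaluation runs, and the process exits normally. The usual remedy is to point `--model_dir` at a fresh, empty directory and keep the pretrained weights only as `fine_tune_checkpoint` in the config:

```
python object_detection/model_main.py ^
  --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config ^
  --model_dir=D:/gitcode/models/research/object_detection/training ^
  --num_train_steps=50000 --alsologtostderr
```

(`.../training` here is a hypothetical new empty directory; any fresh path works. Also note `--num_train_steps=50000` overrides the `num_steps: 200000` in the config.)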

Code runs without errors on TensorFlow CPU, but on GPU it fails at 51% every time

![图片说明](https://img-ask.csdn.net/upload/201908/12/1565613050_565691.png)

```
51%|████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 199/391 [00:38<00:21, 8.81it/s]
2019-08-12 20:20:04.963304: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0000016EAC1D0A40
2019-08-12 20:20:05.763636: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
** On entry to SGEMM parameter number 10 had an illegal value
2019-08-12 20:20:06.320473: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 5236925 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.328931: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1871 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.838588: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 687520 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.850771: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 321 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.999345: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 42770 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:07.499292: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1497278 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:07.510245: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 321 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.020011: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 256112 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.529828: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 341471 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.540870: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 16833 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.697339: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1190 for batch index 0, expected info = 0. Debug_info = heevd
Traceback (most recent call last):
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
  [[{{node KFAC/SelfAdjointEigV2_10}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 67, in <module>
    main()
  File "main.py", line 63, in main
    trainer.train()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\train.py", line 16, in train
    self.train_epoch()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\train.py", line 42, in train_epoch
    self.sess.run([self.model.inv_update_op, self.model.var_update_op], feed_dict=feed_dict)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
  [[node KFAC/SelfAdjointEigV2_10 (defined at E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py:161) ]]

Caused by op 'KFAC/SelfAdjointEigV2_10', defined at:
  File "main.py", line 67, in <module>
    main()
  File "main.py", line 60, in main
    model_ = Model(config, _INPUT_DIM[config.dataset], len(train_loader.dataset))
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\model.py", line 21, in __init__
    self.init_optim()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\model.py", line 70, in init_optim
    momentum=self.config.momentum)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\optimizer.py", line 66, in __init__
    inv_devices=inv_devices)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 58, in __init__
    setup = self._setup(cov_ema_decay)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 108, in _setup
    inv_updates = {op.name: op for op in self._get_all_inverse_update_ops()}
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 108, in <dictcomp>
    inv_updates = {op.name: op for op in self._get_all_inverse_update_ops()}
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 116, in _get_all_inverse_update_ops
    for op in factor.make_inverse_update_ops():
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\fisher_factors.py", line 360, in make_inverse_update_ops
    ops.append(inv.assign(utils.posdef_inv(self._cov, damping)))
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py", line 144, in posdef_inv
    return posdef_inv_functions[POSDEF_INV_METHOD](tensor, identity, damping)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py", line 161, in posdef_inv_eig
    tensor + damping * identity)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\ops\linalg_ops.py", line 328, in self_adjoint_eig
    e, v = gen_linalg_ops.self_adjoint_eig_v2(tensor, compute_v=True, name=name)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\ops\gen_linalg_ops.py", line 2016, in self_adjoint_eig_v2
    "SelfAdjointEigV2", input=input, compute_v=compute_v, name=name)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
  [[node KFAC/SelfAdjointEigV2_10 (defined at E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py:161) ]]
```

Error when loading a trained ResNet PB model. How can I fix this? Please help.

```
2019-11-27 02:18:29 UTC [MainThread ] - /home/mind/app.py[line:121] - INFO: args: Namespace(model_name='serve', model_path='/home/mind/model/1', service_file='/home/mind/model/1/customize_service.py', tf_server_name='127.0.0.1')
2019-11-27 02:18:36.823910: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
Using TensorFlow backend.
[2019-11-27 02:18:37 +0000] [68] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.6/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
  File "/home/mind/app.py", line 145, in <module>
    model_service = class_defs[0](model_name, model_path)
  File "/home/mind/model/1/customize_service.py", line 39, in __init__
    meta_graph_def = tf.saved_model.loader.load(self.sess, [tag_constants.SERVING], self.model_path)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 219, in load
    saver = tf_saver.import_meta_graph(meta_graph_def_to_load, **saver_kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1955, in import_meta_graph
    **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/meta_graph.py", line 743, in import_scoped_meta_graph
    producer_op_list=producer_op_list)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 460, in import_graph_def
    _RemoveDefaultAttrs(op_dict, producer_op_list, graph_def)
  File "/usr/local/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 227, in _RemoveDefaultAttrs
    op_def = op_dict[node.op]
KeyError: 'DivNoNan'
```
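A `KeyError` inside `_RemoveDefaultAttrs` usually means the graph was exported by a newer TensorFlow than the one serving it: the runtime has no such op registered. A minimal version-gate sketch; the assumption that `DivNoNan` first shipped around TF 1.11 should be verified against your two environments:

```python
# Minimal version gate (assumption: DivNoNan first shipped around TF 1.11,
# so a graph exported by TF >= 1.11 cannot be imported by an older runtime).
def version_tuple(v):
    """'1.8.0' -> (1, 8, 0); non-numeric suffixes such as 'rc1' are ignored."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def runtime_new_enough(serving_tf_version, required="1.11.0"):
    """True if the serving TensorFlow should know the DivNoNan op."""
    return version_tuple(serving_tf_version) >= version_tuple(required)
```

If the check fails, either upgrade TensorFlow inside the serving image or re-export the model from a TensorFlow version matching the serving runtime.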

Error when running the TensorFlow MNIST tutorial example

```
/tensorflow-master/tensorflow/examples/tutorials/mnist$ python fully_connected_feed.py
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "fully_connected_feed.py", line 277, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
```

I downloaded the package from GitHub and did not change any code; running fully_connected_feed.py raises this error.
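`tf.app.run()` only gained its `main`/`argv` keyword parameters around TF 0.12/1.0, so a tutorial script from a newer branch fails with exactly this `TypeError` on an older installed TensorFlow. Upgrading TensorFlow (or checking out the repo tag matching the installed version) is the real fix; the shim below only illustrates the two calling conventions (`run_compat` is a hypothetical helper, not part of TensorFlow):

```python
import sys

# Hedged compatibility shim: new tf.app.run accepts main/argv keywords,
# while very old versions expose a zero-argument run() that reads sys.argv.
def run_compat(app_run, main, argv):
    try:
        return app_run(main=main, argv=argv)  # TF >= ~1.0 signature
    except TypeError:
        sys.argv = argv  # old signature: arguments are taken from sys.argv
        return app_run()
```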

TensorFlow Object Detection API: training appears stuck

```
2019-03-22 11:47:37.264972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:08:00.0, compute capability: 6.1)
```

TensorFlow itself works fine. After launching model_main.py, training hangs at this point and no .ckpt files appear in the model directory. ![screenshot](https://img-ask.csdn.net/upload/201903/22/1553233226_55747.jpg) Is my GPU just too slow, or is something else going on?

TensorFlow code runs fine on CPU, but on GPU it fails at 51% every time; I couldn't find this problem anywhere online

```
 51%|████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 199/391 [00:38<00:21, 8.81it/s]
2019-08-12 20:20:04.963304: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0000016EAC1D0A40
2019-08-12 20:20:05.763636: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
** On entry to SGEMM parameter number 10 had an illegal value
2019-08-12 20:20:06.320473: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 5236925 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.328931: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1871 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.838588: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 687520 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.850771: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 321 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:06.999345: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 42770 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:07.499292: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1497278 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:07.510245: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 321 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.020011: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 256112 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.529828: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 341471 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.540870: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 16833 for batch index 0, expected info = 0. Debug_info = heevd
2019-08-12 20:20:08.697339: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cuda_solvers.cc:260 : Invalid argument: Got info = 1190 for batch index 0, expected info = 0. Debug_info = heevd
Traceback (most recent call last):
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
	 [[{{node KFAC/SelfAdjointEigV2_10}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 67, in <module>
    main()
  File "main.py", line 63, in main
    trainer.train()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\train.py", line 16, in train
    self.train_epoch()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\train.py", line 42, in train_epoch
    self.sess.run([self.model.inv_update_op, self.model.var_update_op], feed_dict=feed_dict)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
	 [[node KFAC/SelfAdjointEigV2_10 (defined at E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py:161) ]]

Caused by op 'KFAC/SelfAdjointEigV2_10', defined at:
  File "main.py", line 67, in <module>
    main()
  File "main.py", line 60, in main
    model_ = Model(config, _INPUT_DIM[config.dataset], len(train_loader.dataset))
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\model.py", line 21, in __init__
    self.init_optim()
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\core\model.py", line 70, in init_optim
    momentum=self.config.momentum)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\optimizer.py", line 66, in __init__
    inv_devices=inv_devices)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 58, in __init__
    setup = self._setup(cov_ema_decay)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 108, in _setup
    inv_updates = {op.name: op for op in self._get_all_inverse_update_ops()}
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 108, in <dictcomp>
    inv_updates = {op.name: op for op in self._get_all_inverse_update_ops()}
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\estimator.py", line 116, in _get_all_inverse_update_ops
    for op in factor.make_inverse_update_ops():
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\fisher_factors.py", line 360, in make_inverse_update_ops
    ops.append(inv.assign(utils.posdef_inv(self._cov, damping)))
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py", line 144, in posdef_inv
    return posdef_inv_functions[POSDEF_INV_METHOD](tensor, identity, damping)
  File "E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py", line 161, in posdef_inv_eig
    tensor + damping * identity)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\ops\linalg_ops.py", line 328, in self_adjoint_eig
    e, v = gen_linalg_ops.self_adjoint_eig_v2(tensor, compute_v=True, name=name)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\ops\gen_linalg_ops.py", line 2016, in self_adjoint_eig_v2
    "SelfAdjointEigV2", input=input, compute_v=compute_v, name=name)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\softAPP\python\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Got info = 85505 for batch index 0, expected info = 0. Debug_info = heevd
	 [[node KFAC/SelfAdjointEigV2_10 (defined at E:\python代码\noisy-K-FAC\noisy-K-FAC\ops\utils.py:161) ]]
```
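The failing op is the eigendecomposition K-FAC uses to invert its covariance blocks: cusolver's `heevd` returns a nonzero `info` when the GPU solver fails to converge on a near-singular matrix, while the CPU kernel often still succeeds, matching "CPU fine, GPU fails". Below is a NumPy mirror of what `posdef_inv_eig` in `ops/utils.py` computes (an illustrative sketch, not the project's code); raising the damping value, switching `POSDEF_INV_METHOD` to a Cholesky-based inverse, or pinning this op to the CPU are the usual workarounds to try:

```python
import numpy as np

# NumPy mirror of posdef_inv_eig: invert (cov + damping * I) through an
# eigendecomposition. On GPU this is where cusolver's heevd fails for
# ill-conditioned covariances; a larger `damping` is the usual first fix.
def posdef_inv_eig(cov, damping):
    n = cov.shape[0]
    eigenvalues, eigenvectors = np.linalg.eigh(cov + damping * np.eye(n))
    return (eigenvectors / eigenvalues) @ eigenvectors.T
```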

Error when running train.py before training the model

Thanks, everyone! After many hurdles I finally reached the last step, only to get stuck here. Screenshot below. ![screenshot](https://img-ask.csdn.net/upload/201908/01/1564625533_691952.png) This is the second time I've hit this kind of error: it also appeared when I ran generate_tfrecord.py, but since the .record file was generated anyway I ignored it. That no longer seems to work.

Running a Python script via a batch file reports a missing 'keys' attribute error; I found the source code but can't make sense of it

When training on my own dataset I use retrain.py, launched through a batch file. Here is the batch file:

```
python D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py ^
--bottleneck_dir bottleneck ^
--how_many_training_steps 200 ^
--model_dir D:\python\Anaconda\envs\tensorflow\inception_model ^
--output_graph output_graph.pb ^
--output_labels output_labels.txt ^
--image_dir data/train/
pause
```

When run, it reports:

```
  File "D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py", line 1313, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "D:\python\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "D:\python\Anaconda\envs\tensorflow\tensorflow-master\tensorflow\examples\image_retraining\retrain.py", line 982, in main
    class_count = len(image_lists.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```

The error says the object at line 982 has no 'keys' attribute, but I don't know how to fix it. retrain.py is too long to paste, so I can only post the link.
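`image_lists` is `None` because retrain.py found nothing usable under `--image_dir`. Note that the batch file passes the relative path `data/train/`, which is resolved against the directory the .bat runs from, not against retrain.py's location. A defensive sketch of the kind of pre-check that makes this failure explicit (`check_image_dir` is a hypothetical helper, not part of retrain.py):

```python
import os
import sys

# Defensive pre-check (assumption: retrain.py's image-list builder yields
# None when --image_dir is missing or has no per-class sub-folders, which
# later crashes at image_lists.keys()).
def check_image_dir(image_dir):
    if not os.path.isdir(image_dir):
        sys.exit("Image directory '%s' not found." % image_dir)
    sub_dirs = sorted(d for d in os.listdir(image_dir)
                      if os.path.isdir(os.path.join(image_dir, d)))
    if not sub_dirs:
        sys.exit("No class sub-folders found under '%s'." % image_dir)
    return sub_dirs
```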

ValueError: None values not supported.

```
Traceback (most recent call last):
  File "document_summarizer_training_testing.py", line 296, in <module>
    tf.app.run()
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "document_summarizer_training_testing.py", line 291, in main
    train()
  File "document_summarizer_training_testing.py", line 102, in train
    model = MY_Model(sess, len(vocab_dict)-2)
  File "/home/lyliu/Refresh-master-self-attention/my_model.py", line 70, in __init__
    self.train_op_policynet_expreward = model_docsum.train_neg_expectedreward(self.rewardweighted_cross_entropy_loss_multi
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in train_neg_expectedreward
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/Refresh-master-self-attention/model_docsum.py", line 835, in <listcomp>
    grads_and_vars_capped_norm = [(tf.clip_by_norm(grad, 5.0), var) for grad, var in grads_and_vars]
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py", line 107, in clip_by_norm
    t = ops.convert_to_tensor(t, name="t")
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 676, in convert_to_tensor
    as_ref=False)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 741, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/home/lyliu/anaconda3/envs/tensorflowgpu/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```

I am using the GPU build of TensorFlow, version 1.2.0. I hope to find a fix, or at least the cause of this error.
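The `ValueError` comes from `tf.clip_by_norm(grad, 5.0)` receiving `grad=None`: `compute_gradients()` returns a `(None, var)` pair for every variable the loss does not depend on. Filtering those pairs out before clipping is the standard fix; a minimal framework-independent sketch (`clip_valid_grads` is a hypothetical helper):

```python
# Variables that the loss never touches get grad=None from
# compute_gradients(), and tf.clip_by_norm(None, 5.0) then raises
# "None values not supported."
def clip_valid_grads(grads_and_vars, clip_fn, clip_norm=5.0):
    """Clip only gradients that exist; drop the (None, var) pairs."""
    return [(clip_fn(g, clip_norm), v) for g, v in grads_and_vars if g is not None]
```

In model_docsum.py this amounts to adding `if grad is not None` to the list comprehension at line 835, and then checking why some variables are disconnected from the loss in the first place.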

My MNIST run reports an error. Where is the problem?

```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse  # parses the training/testing data arguments
import sys

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

FLAGS = None


def main(_):
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

  # Create the model
  x = tf.placeholder(tf.float32, [None, 784])  # placeholder: an input slot fed with a concrete value at run time
  W = tf.Variable(tf.zeros([784, 10]))  # tf.zeros initializes every entry to zero
  b = tf.Variable(tf.zeros([10]))
  y = tf.matmul(x, W) + b  # logits for each class

  # Define loss and optimizer
  y_ = tf.placeholder(tf.float32, [None, 10])

  # The raw formulation of cross-entropy,
  #
  #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
  #                                 reduction_indices=[1]))
  #
  # can be numerically unstable.
  #
  # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
  # outputs of 'y', and then average across the batch.
  cross_entropy = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
  train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

  sess = tf.InteractiveSession()
  tf.global_variables_initializer().run()
  # Train
  for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

  # Test trained model
  correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
  accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
  print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                      y_: mnist.test.labels}))


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory for storing input data')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
```

Below is the error:

```
TimeoutError                              Traceback (most recent call last)
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
   1317             h.request(req.get_method(), req.selector, req.data, headers,
-> 1318                       encode_chunked=req.has_header('Transfer-encoding'))
   1319         except OSError as err: # timeout error
~\Anaconda3\envs\tensorflow\lib\http\client.py in request(self, method, url, body, headers, encode_chunked)
   1238         """Send a complete request to the server."""
-> 1239         self._send_request(method, url, body, headers, encode_chunked)
   1240
~\Anaconda3\envs\tensorflow\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked)
   1284             body = _encode(body, 'body')
-> 1285         self.endheaders(body, encode_chunked=encode_chunked)
   1286
~\Anaconda3\envs\tensorflow\lib\http\client.py in endheaders(self, message_body, encode_chunked)
   1233             raise CannotSendHeader()
-> 1234         self._send_output(message_body, encode_chunked=encode_chunked)
   1235
~\Anaconda3\envs\tensorflow\lib\http\client.py in _send_output(self, message_body, encode_chunked)
   1025         del self._buffer[:]
-> 1026         self.send(msg)
   1027
~\Anaconda3\envs\tensorflow\lib\http\client.py in send(self, data)
    963             if self.auto_open:
-->  964                 self.connect()
    965             else:
~\Anaconda3\envs\tensorflow\lib\http\client.py in connect(self)
   1399             self.sock = self._context.wrap_socket(self.sock,
-> 1400                                                   server_hostname=server_hostname)
   1401             if not self._context.check_hostname and self._check_hostname:
~\Anaconda3\envs\tensorflow\lib\ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
    400                          server_hostname=server_hostname,
-->  401                          _context=self, _session=session)
    402
~\Anaconda3\envs\tensorflow\lib\ssl.py in __init__(self, sock, keyfile, certfile, server_side, cert_reqs, ssl_version, ca_certs, do_handshake_on_connect, family, type, proto, fileno, suppress_ragged_eofs, npn_protocols, ciphers, server_hostname, _context, _session)
    807                         raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-->  808                     self.do_handshake()
    809
~\Anaconda3\envs\tensorflow\lib\ssl.py in do_handshake(self, block)
   1060                 self.settimeout(None)
-> 1061             self._sslobj.do_handshake()
   1062         finally:
~\Anaconda3\envs\tensorflow\lib\ssl.py in do_handshake(self)
    682         """Start the SSL/TLS handshake."""
-->  683         self._sslobj.do_handshake()
    684         if self.context.check_hostname:
TimeoutError: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。

During handling of the above exception, another exception occurred:

URLError                                  Traceback (most recent call last)
<ipython-input-1-eaf9732201f9> in <module>()
     57                       help='Directory for storing input data')
     58   FLAGS, unparsed = parser.parse_known_args()
---> 59   tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py in run(main, argv)
     46   # Call the main function, passing through any arguments
     47   # to the final program.
---> 48   _sys.exit(main(_sys.argv[:1] + flags_passthrough))
     49
     50
<ipython-input-1-eaf9732201f9> in main(_)
     15 def main(_):
     16   # Import data
---> 17   mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
     18
     19   # Create the model
~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py in read_data_sets(train_dir, fake_data, one_hot, dtype, reshape, validation_size, seed)
    238
    239   local_file = base.maybe_download(TRAIN_LABELS, train_dir,
--> 240                                    SOURCE_URL + TRAIN_LABELS)
    241   with open(local_file, 'rb') as f:
    242     train_labels = extract_labels(f, one_hot=one_hot)
~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in maybe_download(filename, work_directory, source_url)
    206   filepath = os.path.join(work_directory, filename)
    207   if not gfile.Exists(filepath):
--> 208     temp_file_name, _ = urlretrieve_with_retry(source_url)
    209     gfile.Copy(temp_file_name, filepath)
    210     with gfile.GFile(filepath) as f:
~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in wrapped_fn(*args, **kwargs)
    163     for delay in delays():
    164       try:
--> 165         return fn(*args, **kwargs)
    166       except Exception as e:  # pylint: disable=broad-except)
    167         if is_retriable is None:
~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in urlretrieve_with_retry(url, filename)
    188 @retry(initial_delay=1.0, max_delay=16.0, is_retriable=_is_retriable)
    189 def urlretrieve_with_retry(url, filename=None):
--> 190   return urllib.request.urlretrieve(url, filename)
    191
    192
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data)
    246   url_type, path = splittype(url)
    247
--> 248   with contextlib.closing(urlopen(url, data)) as fp:
    249     headers = fp.info()
    250
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
    221     else:
    222         opener = _opener
--> 223     return opener.open(url, data, timeout)
    224
    225 def install_opener(opener):
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in open(self, fullurl, data, timeout)
    524             req = meth(req)
    525
--> 526         response = self._open(req, data)
    527
    528         # post-process response
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in _open(self, req, data)
    542         protocol = req.type
    543         result = self._call_chain(self.handle_open, protocol, protocol +
--> 544                                   '_open', req)
    545         if result:
    546             return result
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args)
    502         for handler in handlers:
    503             func = getattr(handler, meth_name)
--> 504             result = func(*args)
    505             if result is not None:
    506                 return result
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in https_open(self, req)
   1359     def https_open(self, req):
   1360         return self.do_open(http.client.HTTPSConnection, req,
-> 1361                             context=self._context, check_hostname=self._check_hostname)
   1362
   1363     https_request = AbstractHTTPHandler.do_request_
~\Anaconda3\envs\tensorflow\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
   1318                           encode_chunked=req.has_header('Transfer-encoding'))
   1319             except OSError as err: # timeout error
-> 1320                 raise URLError(err)
   1321             r = h.getresponse()
   1322         except:
URLError: <urlopen error [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。>
```
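The traceback is a network timeout: `read_data_sets()` tries to download MNIST over the Internet and the connection fails (WinError 10060). `maybe_download()` skips the download when the files already exist, so fetching the four archives manually into `data_dir` avoids the network entirely. A small sketch for verifying the files are in place (`mnist_ready` is a hypothetical helper; the file names are the standard MNIST archive names):

```python
import os

# Offline sketch: read_data_sets() only downloads an archive when it is
# absent from data_dir, so fetching the four standard MNIST files by other
# means avoids the timed-out connection entirely.
MNIST_FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]

def mnist_ready(data_dir):
    """True once all four MNIST archives are already present locally."""
    return all(os.path.exists(os.path.join(data_dir, f)) for f in MNIST_FILES)
```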

Unable to start the BERT Chinese embedding service

When starting the BERT Chinese model service, I hit this problem and don't know how to solve it:

```
I:VENTILATOR:freeze, optimize and export graph, could take a while...
2020-02-27 21:54:07.408910: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-02-27 21:54:07.414121: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
c:\users\lenovo\appdata\local\programs\python\python37\lib\site-packages\bert_serving\server\helper.py:176: UserWarning: Tensorflow 2.1.0 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)
E:GRAPHOPT:fail to optimize the graph!
Traceback (most recent call last):
  File "c:\users\lenovo\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\lenovo\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python37\Scripts\bert-serving-start.exe\__main__.py", line 7, in <module>
  File "c:\users\lenovo\appdata\local\programs\python\python37\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
    with BertServer(get_run_args()) as server:
  File "c:\users\lenovo\appdata\local\programs\python\python37\lib\site-packages\bert_serving\server\__init__.py", line 71, in __init__
    self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: cannot unpack non-iterable NoneType object
```
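The log already hints at the cause: bert-as-service warns that "Tensorflow 2.1.0 is not tested", then its graph optimizer fails and returns `None`, which the caller tries to unpack. The project targets TensorFlow 1.x (its README states TensorFlow >= 1.10), so installing a 1.x build, e.g. `pip install "tensorflow>=1.10,<2"` in a separate environment, is the usual fix. A tiny sketch of the version gate (`tf1_compatible` is a hypothetical helper; the exact supported range should be double-checked against the README):

```python
# Version gate sketch (assumption: bert-as-service supports TF 1.10 through
# the 1.x line, and its graph optimizer fails under TF 2.x).
def tf1_compatible(tf_version):
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    return major == 1 and minor >= 10
```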

Urgent: running yolov3 train.py in PyCharm fails

![screenshot](https://img-ask.csdn.net/upload/201905/23/1558616226_449733.png)

```
import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping

from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
from yolo3.utils import get_random_data


def _main():
    annotation_path = 'train.txt'
    log_dir = 'logs/000/'
    classes_path = 'model_data/voc_classes.txt'
    anchors_path = 'model_data/yolo_anchors.txt'
    class_names = get_classes(classes_path)
    anchors = get_anchors(anchors_path)
    input_shape = (416, 416)  # multiple of 32, hw
    model = create_model(input_shape, anchors, len(class_names))
    train(model, annotation_path, input_shape, anchors, len(class_names), log_dir=log_dir)


def train(model, annotation_path, input_shape, anchors, num_classes, log_dir='logs/'):
    model.compile(optimizer='adam', loss={'yolo_loss': lambda y_true, y_pred: y_pred})
    logging = TensorBoard(log_dir=log_dir)
    checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
                                 monitor='val_loss', save_weights_only=True,
                                 save_best_only=True, period=1)
    batch_size = 8
    val_split = 0.1
    with open(annotation_path) as f:
        lines = f.readlines()
    np.random.shuffle(lines)
    num_val = int(len(lines) * val_split)
    num_train = len(lines) - num_val
    print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size))

    model.fit_generator(
        data_generator_wrapper(lines[:num_train], batch_size, input_shape, anchors, num_classes),
        steps_per_epoch=max(1, num_train // batch_size),
        validation_data=data_generator_wrapper(lines[num_train:], batch_size, input_shape, anchors, num_classes),
        validation_steps=max(1, num_val // batch_size),
        epochs=10,
        initial_epoch=0,
        callbacks=[logging, checkpoint])
    model.save_weights(log_dir + 'trained_weights.h5')


def get_classes(classes_path):
    with open(classes_path) as f:
        class_names = f.readlines()
    class_names = [c.strip() for c in class_names]
    return class_names


def get_anchors(anchors_path):
    with open(anchors_path) as f:
        anchors = f.readline()
    anchors = [float(x) for x in anchors.split(',')]
    return np.array(anchors).reshape(-1, 2)


def create_model(input_shape, anchors, num_classes, load_pretrained=False, freeze_body=False,
                 weights_path='model_data/yolo_weights.h5'):
    K.clear_session()  # get a new session
    h, w = input_shape
    image_input = Input(shape=(w, h, 3))
    num_anchors = len(anchors)
    y_true = [Input(shape=(h // {0: 32, 1: 16, 2: 8}[l], w // {0: 32, 1: 16, 2: 8}[l],
                           num_anchors // 3, num_classes + 5)) for l in range(3)]

    model_body = yolo_body(image_input, num_anchors // 3, num_classes)
    print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes))

    if load_pretrained:
        model_body.load_weights(weights_path, by_name=True, skip_mismatch=True)
        print('Load weights {}.'.format(weights_path))
        if freeze_body in [1, 2]:
            # Do not freeze 3 output layers.
            num = (185, len(model_body.layers) - 3)[freeze_body - 1]
            for i in range(num):
                model_body.layers[i].trainable = False
            print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers)))

    model_loss = Lambda(yolo_loss, output_shape=(1,), name='yolo_loss',
                        arguments={'anchors': anchors, 'num_classes': num_classes,
                                   'ignore_thresh': 0.5})(model_body.output + y_true)
    model = Model(inputs=[model_body.input] + y_true, outputs=model_loss)
    return model


def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    i = 0
    while True:
        image_data = []
        box_data = []
        for b in range(batch_size):
            if i == 0:
                np.random.shuffle(annotation_lines)
            image, box = get_random_data(annotation_lines[i], input_shape, random=True)
            image_data.append(image)
            box_data.append(box)
            i = (i + 1) % n
        image_data = np.array(image_data)
        box_data = np.array(box_data)
        y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes)
        yield [image_data] + y_true, np.zeros(batch_size)


def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes):
    n = len(annotation_lines)
    if n == 0 or batch_size <= 0:
        return None
    return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes)


if __name__ == '__main__':
    _main()
```

It reports:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Inputs to operation training/Adam/gradients/AddN_24 of type _MklAddN must have the same size and shape.  Input 0: [2768896] != input 1: [8,26,26,512]
	 [[Node: training/Adam/gradients/AddN_24 = _MklAddN[N=2, T=DT_FLOAT, _kernel="MklOp", _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/Adam/gradients/batch_normalization_65/FusedBatchNorm_grad/FusedBatchNormGrad, training/Adam/gradients/batch_n
```
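One telling detail in the error: the two inputs that `_MklAddN` rejects hold the same number of elements (2768896 = 8 * 26 * 26 * 512), just with different ranks. That points at a tensor-layout problem inside the MKL build of TensorFlow rather than a wiring mistake in the script; a commonly reported workaround (an assumption to verify for your setup) is switching to a non-MKL TensorFlow build. A quick arithmetic check:

```python
from functools import reduce

# Both _MklAddN inputs carry the same element count, so the graph's shapes
# are arithmetically consistent and the mismatch is a layout-conversion
# issue in the MKL kernel, not a modelling bug in train.py.
def numel(shape):
    return reduce(lambda a, b: a * b, shape, 1)

assert numel([8, 26, 26, 512]) == numel([2768896]) == 2768896
```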

Question: testing the TensorFlow Object Detection API fails immediately

# AttributeError: module 'tensorflow.python.keras' has no attribute 'Model'

![screenshot](https://img-ask.csdn.net/upload/201901/19/1547905831_372525.png)

Could someone take a look and tell me how to fix this?

TypeError when training the same CNN on a different dataset. How can I fix this?

最近在自己试着运行这个Flood-filling Networks算法,是全脑图像分割算法,基于CNN,使用Tensorflow,先附上链接: https://github.com/google/ffn 作者给的用于训练推理的一个样本数据集是FIB-25数据集,具体来说就是果蝇的切片脑图像,用于训练的数据集的维度是520\*520\*520(灰度矩阵和标签都是这个维度),我使用作者给的这个数据集运行十分顺利,但是在换用另一个数据集:CREMI数据集(也是果蝇的脑切片,维度是125\*1250\*1250)进行网络的训练的时候,报错如下: ``` Traceback (most recent call last): File "train.py", line 739, in <module> app.run(main) File "/home/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run _run_main(main, args) File "/home/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "train.py", line 730, in main **json.loads(FLAGS.model_args)) File "train.py", line 624, in train_ffn load_data_ops = define_data_input(model, queue_batch=1) File "train.py", line 412, in define_data_input 0])) File "/home/.local/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 2649, in equal "Equal", x=x, y=y, name=name) File "/home/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper param_name=input_name) File "/home/.local/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint ", ".join(dtypes.as_dtype(x).name for x in allowed_list))) TypeError: Value passed to parameter 'x' has DataType uint64 not in list of allowed values: bfloat16, float16, float32, float64, uint8, int8, int16, int32, int64, complex64, quint8, qint8, qint32, string, bool, complex128 ``` 看上去问题的核心在于train.py中的 ``` **json.loads(FLAGS.model_args)) ``` 这一行,但是这里的参数FLAGS.model_args是在训练模型的时候给定的参数,不管使用哪个数据集都是一样的,我把运行训练文件的代码贴过来: ``` python train.py \ --train_coords third_party/neuroproof_examples/validation_sample/tf_record_file/tf_record.tfrecords \ #在之前的运行中生成的tfrecords文件 --data_volumes validation1:third_party/neuroproof_examples/validation_sample/grayscale_maps.h5:raw \ #灰度矩阵 --label_volumes validation1:third_party/neuroproof_examples/validation_sample/groundtruth.h5:stack \ #标签矩阵 --model_name 
convstack_3d.ConvStack3DFFNModel \ --model_args '{"depth": 12, "fov_size": [33, 33, 33], "deltas": [8, 8, 8]}' \ #模型参数 --image_mean 128 \ --image_stddev 33 ``` 我怎么也想不通为什么对不同的数据集,json在解码相同的model_args时会报错……?是因为CREMI数据集的三个维数不一致,我依然沿用相同的参数导致的吗(FIB-25数据集维度是520\*520\*520,CREMI数据集维度是125\*1250\*1250)? 有些茫然,感谢大家的帮助。
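One hedged reading of the traceback, not a confirmed answer: the failure is not in `json.loads` at all (that frame is merely higher in the call stack) but inside `define_data_input`, where a `tf.equal` op receives a `uint64` tensor, a dtype absent from the error's allowed list. CREMI groundtruth HDF5 volumes commonly store segment IDs as `uint64`, whereas the FIB-25 sample labels fit a smaller dtype, which would explain why only CREMI fails. If that is the cause, casting the label volume to `int64` before training may resolve it. A minimal NumPy sketch of the cast (the one-time h5py rewrite of a hypothetical `groundtruth.h5` is shown only in comments):

```python
import numpy as np

# Stand-in for a CREMI label volume: segment IDs stored as uint64, the
# dtype tf.equal rejects ("DataType uint64 not in list of allowed values").
labels = np.array([[0, 1], [1, 2]], dtype=np.uint64)

# int64 IS in tf.equal's allowed list; IDs below 2**63 survive the cast intact.
converted = labels.astype(np.int64)
assert converted.dtype == np.int64

# In practice you would rewrite the HDF5 dataset once before training
# (assumes h5py; "stack" is the dataset name from the --label_volumes flag):
#   with h5py.File("groundtruth.h5", "r+") as f:
#       data = f["stack"][...].astype(np.int64)
#       del f["stack"]
#       f.create_dataset("stack", data=data)
```

If the cast fixes the dtype error, the mismatched volume dimensions (125×1250×1250 vs. 520×520×520) should not by themselves be a problem, since the model only consumes fov_size-shaped subvolumes.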

"Windows fatal exception: access violation" when training with the object_detection API on Windows 10

Error output:

```
Windows fatal exception: access violation

Current thread 0x00000e40 (most recent call first):
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 84 in _preread_check
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 122 in read
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\label_map_util.py", line 138 in load_labelmap
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\label_map_util.py", line 169 in get_label_map_dict
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\data_decoders\tf_example_decoder.py", line 64 in __init__
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\data_decoders\tf_example_decoder.py", line 319 in __init__
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py", line 130 in build
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\inputs.py", line 579 in train_input
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\inputs.py", line 476 in _train_input_fn
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1116 in _call_input_fn
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1025 in _get_features_and_labels_from_input_fn
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1188 in _train_model_default
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161 in _train_model
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370 in train
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 714 in run_local
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 613 in run
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_estimator\python\estimator\training.py", line 473 in train_and_evaluate
  File ".\object_detection\model_main.py", line 105 in main
  File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 250 in _run_main
  File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 299 in run
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow_core\python\platform\app.py", line 40 in run
  File ".\object_detection\model_main.py", line 109 in <module>
```
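Not an answer from the thread, only a hedged observation: the crash begins in `_preread_check` while `load_labelmap` is reading the label map file, so a wrong, unreadable, or oddly encoded `label_map_path` in pipeline.config is one plausible trigger worth ruling out first. A stdlib-only sanity check you can run before launching training (the path below is a placeholder; substitute the one from your pipeline.config):

```python
import os

def check_readable(path):
    """Return True if `path` exists as a file and its contents read as UTF-8 text."""
    if not os.path.isfile(path):
        return False
    try:
        with open(path, "r", encoding="utf-8") as f:
            f.read()
        return True
    except (OSError, UnicodeDecodeError):
        return False

# Hypothetical label map location; use the label_map_path from your pipeline.config.
label_map_path = r"E:\python_demo\pedestrian_demo\data\label_map.pbtxt"
print(check_readable(label_map_path))
```

If this prints `False`, fix the path (prefer forward slashes or raw strings on Windows) before digging into the TensorFlow side.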
