When training a TensorFlow model (object_detection), training exits after the first evaluation. How do I make it keep training?

When I train an SSD model, training runs for about 10 minutes, then enters the evaluation phase, and after the evaluation the program exits on its own. I see no errors or warnings. Why does this happen, and how can I make the program keep training?

Training command:

python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
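
Background: in the TF 1.x Object Detection API, model_main.py drives the whole run through tf.estimator.train_and_evaluate, which interleaves evaluation with training and returns cleanly, without any error or warning, once the global step restored from --model_dir reaches --num_train_steps. A simplified sketch of that loop (the helper name train_and_eval is illustrative; only the tf.estimator calls are the real API):

```
import tensorflow as tf

def train_and_eval(estimator, train_input_fn, eval_input_fn, num_train_steps):
    # Training stops as soon as the global step read back from model_dir
    # reaches max_steps; after a final evaluation, train_and_evaluate
    # simply returns and the process exits with no error.
    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
                                        max_steps=num_train_steps)
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn,
                                      steps=None,         # evaluate the whole eval set
                                      throttle_secs=600)  # wait >= 600s between evals
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```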


System information

What is the top-level directory of the model you are using: models/research/object_detection/
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 (64-bit)
TensorFlow installed from (source or binary): conda install tensorflow-gpu
TensorFlow version (use command below): 1.13.1
Bazel version (if compiling from source): N/A
CUDA/cuDNN version: cuDNN 7.6.0
GPU model and memory: GeForce GTX 1060, 6 GB
Exact command to reproduce: see the training command above
This is my config:

train_config {
  batch_size: 24
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}

train_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord"
  }
}

eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}

eval_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord"
  }
}
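
One detail worth checking, given that --model_dir points inside the downloaded ssd_mobilenet_v1_coco_2018_01_28 folder: the Estimator resumes from whatever checkpoint it finds in model_dir, and training ends as soon as that checkpoint's global step is at or beyond --num_train_steps. A small check, assuming the TF 1.x API and using the paths from the question, to print the step stored in the checkpoint that model_dir resolves to:

```
import tensorflow as tf

# Path copied from the question's --model_dir flag.
model_dir = ("D:/gitcode/models/research/object_detection/"
             "ssd_mobilenet_v1_coco_2018_01_28/saved_model")

ckpt = tf.train.latest_checkpoint(model_dir)
if ckpt is None:
    print("no checkpoint found in", model_dir)
else:
    reader = tf.train.load_checkpoint(ckpt)
    if reader.has_tensor("global_step"):
        print("global_step =", reader.get_tensor("global_step"))
```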

Console output:
(default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
WARNING:tensorflow:Estimator's model_fn () includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.experimental.parallel_interleave(...).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.batch(..., drop_remainder=True).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:04:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.

2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.43s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.287
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.529
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.162
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384
(default) D:\gitcode\models\research>

weixin_45369878: Did you ever solve this? (6 months ago)

1 answer

weixin_45369878 (replying to qq_20160829): I'm hitting this error too; did you ever solve it? (6 months ago)
qq_20160829: That example uses train.py; I'm using the new script, model_main.py. (6 months ago)
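
For reference, the legacy trainer mentioned in the reply above runs training continuously, without interleaved evaluation. Assuming the r1.13 repository layout, where the old script lives at object_detection/legacy/train.py, and a hypothetical fresh output directory for --train_dir, an equivalent invocation would be:

python object_detection/legacy/train.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --train_dir=D:/gitcode/models/research/object_detection/idol_train --alsologtostderr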