When training a TensorFlow model (object_detection), training exits after the first evaluation. How do I make it keep training?

When I train an SSD model, training runs for about ten minutes and then enters the evaluation phase. After the evaluation finishes, the program exits on its own, with no error or warning. Why does this happen, and how can I make it keep training?

Training command:

python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
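
For background on why the script can stop after an evaluation without any error: in this version of the Object Detection API, model_main.py builds a tf.estimator.Estimator and drives the whole run through tf.estimator.train_and_evaluate, which interleaves evaluation with training (EvalSpec's default throttle_secs of 600 matches the ten-minute cadence described above) and simply returns once the checkpoint's global_step reaches --num_train_steps. Here is a minimal, self-contained sketch of that loop; the toy model_fn, input_fn, and /tmp/toy_model path are illustrative stand-ins, not the API's real code:

```
import numpy as np
import tensorflow as tf

def model_fn(features, labels, mode):
    # Toy linear model; the real model_fn is assembled from
    # pipeline.config by object_detection/model_lib.py.
    w = tf.get_variable("w", [], tf.float32)
    preds = features["x"] * w
    loss = tf.losses.mean_squared_error(labels, preds)
    train_op = None
    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

def input_fn():
    x = np.random.rand(100).astype(np.float32)
    return tf.data.Dataset.from_tensor_slices(
        ({"x": x}, 2.0 * x)).repeat().batch(10)

estimator = tf.estimator.Estimator(model_fn, model_dir="/tmp/toy_model")

# train_and_evaluate trains until global_step >= max_steps, running an
# evaluation whenever a new checkpoint appears (at most once every
# throttle_secs). If global_step already satisfies max_steps, no
# training happens at all: it runs one final evaluation and returns
# normally, so the process exits with no error or warning.
tf.estimator.train_and_evaluate(
    estimator,
    tf.estimator.TrainSpec(input_fn, max_steps=1000),
    tf.estimator.EvalSpec(input_fn, steps=10, throttle_secs=60))
```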

System information

What is the top-level directory of the model you are using: models/research/object_detection/
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 (64-bit)
TensorFlow installed from (source or binary): binary (conda install tensorflow-gpu)
TensorFlow version (use command below): 1.13.1
Bazel version (if compiling from source): N/A
CUDA/cuDNN version: cuDNN 7.6.0
GPU model and memory: GeForce GTX 1060, 6 GB
Exact command to reproduce: see the training command above

This is my config:

train_config {
  batch_size: 24
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}

train_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord"
  }
}

eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}

eval_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord"
  }
}

Console output:
(default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
WARNING:tensorflow:Estimator's model_fn () includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.experimental.parallel_interleave(...).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.batch(..., drop_remainder=True).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:04:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means tf.py_functions can use accelerators such as GPUs as well as
being differentiable using a gradient tape.

2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.43s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.287
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.529
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.162
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.356
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384
(default) D:\gitcode\models\research>
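
Given that the log ends cleanly after the COCO metrics with no traceback, one quick check is the global_step recorded in the newest checkpoint under --model_dir: if it is already at or past --num_train_steps (50000 here), train_and_evaluate skips training, evaluates once, and returns, which is exactly the behaviour observed. A sketch for inspecting it; the model_dir is the one from the command, which notably points into the downloaded pretrained model's folder:

```
import tensorflow as tf

model_dir = ("D:/gitcode/models/research/object_detection/"
             "ssd_mobilenet_v1_coco_2018_01_28/saved_model")

# Locate the newest checkpoint and read its recorded global_step.
ckpt = tf.train.latest_checkpoint(model_dir)
print("latest checkpoint:", ckpt)
if ckpt:
    reader = tf.train.NewCheckpointReader(ckpt)
    print("global_step:", reader.get_tensor("global_step"))
```

If that value is at or above 50000, pointing --model_dir at a fresh, empty directory (while keeping fine_tune_checkpoint as the pretrained weights) would start global_step from 0; that is an inference from the Estimator semantics sketched earlier, not a confirmed fix for this report.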

weixin_45369878: Did you ever solve this? (replied 10 months ago)

1 answer

weixin_45369878 (replying to qq_20160829): I hit the same problem. Did you ever solve it? (replied 10 months ago)
qq_20160829: That example uses train.py; I'm using the newer script model_main.py. (replied 10 months ago)
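
For anyone weighing the two entry points mentioned in this thread: the legacy script trains in a plain loop with no interleaved evaluation, so it never exits because of an eval pass; evaluation is run separately with eval.py. Its usual invocation looks roughly like this (paths are placeholders; depending on the checkout it lives at object_detection/train.py or object_detection/legacy/train.py):

```
python object_detection/legacy/train.py --logtostderr --pipeline_config_path=path/to/pipeline.config --train_dir=path/to/train_dir
```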

