Faster R-CNN's train.prototxt has no data layer

I've recently been training Faster R-CNN on my own data and need to change the num_classes parameter of the data layer in train.prototxt, but train.prototxt has no data layer. How should I modify it?

1 answer

```
layer {
  name: 'input-data'
  type: 'Python'
  top: 'data'
  top: 'im_info'
  top: 'gt_boxes'
  python_param {
    module: 'roi_data_layer.layer'
    layer: 'RoIDataLayer'
    param_str: "'num_classes': 21"  # <-- change this value
  }
}
```
See? No layer is literally named 'data'; just search for num_classes and you will find it. Also note that train.prototxt contains num_classes in two places, and both must be changed.
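If it helps, here is a rough sketch (my own helper, not part of py-faster-rcnn) of patching every `'num_classes'` occurrence in the prototxt text with one regex; in the stock VOC end-to-end prototxt you would additionally update `num_output` of `cls_score` (your class count) and `bbox_pred` (4 × class count) by hand.

```python
import re

def patch_num_classes(prototxt_text, n):
    """Rewrite every "'num_classes': N" that appears inside the
    param_str of the Python layers (input-data, roi-data)."""
    return re.sub(r"'num_classes':\s*\d+",
                  "'num_classes': %d" % n,
                  prototxt_text)

snippet = 'param_str: "\'num_classes\': 21"'
print(patch_num_classes(snippet, 2))  # param_str: "'num_classes': 2"
```

For example, 20 object classes plus background means `n=21` for VOC, and `n=2` for a single-class detector.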

Other related questions
faster-rcnn reports "Error using fix"
Could someone please take a look? The error is as follows: Error using fix Too many input arguments Error in proposal_train(line 86) fix validation data Error in Faster_RCNN_Train.do_proposal_train(line 7) model_stage.output_model_file=proposal_train(conf,dataset,imdb_train,dataset,roidb_train,… Error in script_faster_rcnn__VOC2007_ZF(line 45) model.stae1_rpn =Faster_RCNN_Train.do_proposal_train(conf_proposal,dataset,model,stage1_rpn,opts.do_val);
faster rcnn demo run error
After configuring faster rcnn, running ./tools/demo.py gives the following error: Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Python
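Not the asker's exact setup, but the usual cause of "Unknown layer type: Python" is a Caffe build without the Python layer compiled in. A sketch of the fix, assuming the standard Makefile.config build of caffe-fast-rcnn:

```shell
# In Makefile.config, uncomment this line before building:
#   WITH_PYTHON_LAYER := 1
# then rebuild:
make clean
make -j8 && make pycaffe
```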
win caffe py-faster-rcnn training error
layer { name: "rpn_bbox_pred" type: "Convolution" bottom: "rpn_conv1Process Process-1: Traceback (most recent call last): File "D:\Anaconda\Anaconda\lib\multiprocessing\process.py", line 267, in _bootstrap self.run() File "D:\Anaconda\Anaconda\lib\multiprocessing\process.py", line 114, in run self._target(*self._args, **self._kwargs) File "D:\py-faster-rcnn\tools\train_faster_rcnn_alt_opt.py", line 129, in train_rpn max_iters=max_iters) File "D:\py-faster-rcnn\tools\..\lib\fast_rcnn\train.py", line 160, in train_net pretrained_model=pretrained_model) File "D:\py-faster-rcnn\tools\..\lib\fast_rcnn\train.py", line 46, in __init__ self.solver = caffe.SGDSolver(solver_prototxt) File "D:\py-faster-rcnn\tools\..\lib\roi_data_layer\layer.py", line 128, in setup top[idx].reshape(1, self._num_classes * 4) IndexError: Index out of range I0415 19:38:05.625026 12668 layer_factory.cpp:58] Creating layer input-data I0415 19:38:05.682178 12668 net.cpp:84] Creating Layer input-data I0415 19:38:05.682178 12668 net.cpp:380] input-data -> data I0415 19:38:05.682178 12668 net.cpp:380] input-data -> im_info I0415 19:38:05.682178 12668 net.cpp:380] input-data -> gt_boxes Then it just hangs there.
Windows faster rcnn error: -std=c99
F:\Caffe-cp\faster_rcnn\py-faster-rcnn\lib>python setup.py install home = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5 running install running bdist_egg running egg_info writing fast_rcnn.egg-info\PKG-INFO writing top-level names to fast_rcnn.egg-info\top_level.txt writing dependency_links to fast_rcnn.egg-info\dependency_links.txt reading manifest file 'fast_rcnn.egg-info\SOURCES.txt' writing manifest file 'fast_rcnn.egg-info\SOURCES.txt' installing library code to build\bdist.win-amd64\egg running install_lib running build_ext skipping 'utils\bbox.c' Cython extension (up-to-date) skipping 'nms\cpu_nms.c' Cython extension (up-to-date) skipping 'pycocotools\_mask.c' Cython extension (up-to-date) building 'pycocotools._mask' extension C:\Users\Joker\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IE:\anaconda2\anaconda\lib\site-packages\numpy\core\include -Ipycocotools -IE:\anaconda2\anaconda\include -IE:\anaconda2\anaconda\PC /Tcpycocotools\maskApi.c /Fobuild\temp.win-amd64-2.7\Release\pycocotools\maskApi.obj -std=c99 cl : Command line warning D9002 : ignoring unknown option '-std=c99' maskApi.c f:\caffe-cp\faster_rcnn\py-faster-rcnn\lib\pycocotools\maskApi.h(8) : fatal error C1083: Cannot open include file: 'stdbool.h': No such file or directory error: command 'C:\\Users\\Joker\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2
Running faster rcnn on my own dataset on Windows hits "'NoneType' object is not subscriptable" (tensorflow)
While trying to run faster rcnn on my own dataset following an online tutorial, a "'NoneType' object is not subscriptable" error pops up. I searched online but still don't understand the cause... How can I solve this? Thanks! ![image](https://img-ask.csdn.net/upload/201910/20/1571582480_395221.png)
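Not specific to the poster's code, but in these TF ports this error almost always means an image failed to load: `cv2.imread` returns `None` (rather than raising) on a bad path or unsupported file, and the first indexing of the result then fails. A pure-Python sketch of the failure mode (`fake_imread` is a stand-in, not the OpenCV API):

```python
def fake_imread(path, _files={'cat.jpg': [[0, 1], [2, 3]]}):
    # stand-in for cv2.imread, which returns None on a missing/bad file
    return _files.get(path)

im = fake_imread('Cat.jpg')   # wrong case / missing file -> None
try:
    row = im[0]
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```

Checking the reader's return value right after loading (and printing the offending path) usually locates the bad annotation or filename quickly.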
Which layer is going wrong during faster rcnn training?
+ echo Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47 Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47 + ./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name ZF --weights data/imagenet_models/CaffeNet.v2.caffemodel --imdb voc_2007_trainval --cfg experiments/cfgs/faster_rcnn_alt_opt.yml Called with args: Namespace(cfg_file='experiments/cfgs/faster_rcnn_alt_opt.yml', gpu_id=0, imdb_name='voc_2007_trainval', net_name='ZF', pretrained_model='data/imagenet_models/CaffeNet.v2.caffemodel', set_cfgs=None) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Stage 1 RPN, init from ImageNet model ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Init model: data/imagenet_models/CaffeNet.v2.caffemodel Using config: {'DATA_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master\\data', 'DEDUP_BOXES': 0.0625, 'EPS': 1e-14, 'EXP_DIR': 'default', 'GPU_ID': 0, 'MATLAB': 'matlab', 'MODELS_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master\\models\\pascal_voc', 'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]), 'RNG_SEED': 3, 'ROOT_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master', 'TEST': {'BBOX_REG': True, 'HAS_RPN': False, 'MAX_SIZE': 1000, 'NMS': 0.3, 'PROPOSAL_METHOD': 'selective_search', 'RPN_MIN_SIZE': 16, 'RPN_NMS_THRESH': 0.7, 'RPN_POST_NMS_TOP_N': 300, 'RPN_PRE_NMS_TOP_N': 6000, 'SCALES': [600], 'SVM': False}, 'TRAIN': {'ASPECT_GROUPING': True, 'BATCH_SIZE': 128, 'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0], 'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2], 'BBOX_NORMALIZE_TARGETS': True, 'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': False, 'BBOX_REG': False, 'BBOX_THRESH': 0.5, 'BG_THRESH_HI': 0.5, 'BG_THRESH_LO': 0.1, 'FG_FRACTION': 0.25, 'FG_THRESH': 0.5, 'HAS_RPN': True, 'IMS_PER_BATCH': 1, 'MAX_SIZE': 1000, 'PROPOSAL_METHOD': 'gt', 'RPN_BATCHSIZE': 256, 'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'RPN_CLOBBER_POSITIVES': False, 'RPN_FG_FRACTION': 0.5, 
'RPN_MIN_SIZE': 16, 'RPN_NEGATIVE_OVERLAP': 0.3, 'RPN_NMS_THRESH': 0.7, 'RPN_POSITIVE_OVERLAP': 0.7, 'RPN_POSITIVE_WEIGHT': -1.0, 'RPN_POST_NMS_TOP_N': 2000, 'RPN_PRE_NMS_TOP_N': 12000, 'SCALES': [600], 'SNAPSHOT_INFIX': '', 'SNAPSHOT_ITERS': 10000, 'USE_FLIPPED': True, 'USE_PREFETCH': False}, 'USE_GPU_NMS': True} Loaded dataset `voc_2007_trainval` for training Set proposal method: gt Appending horizontally-flipped training examples... voc_2007_trainval gt roidb loaded from E:\caffe-frcnn\py-faster-rcnn-master\data\cache\voc_2007_trainval_gt_roidb.pkl done Preparing training data... done roidb len: 100 Output will be saved to `E:\caffe-frcnn\py-faster-rcnn-master\output\default\voc_2007_trainval` Filtered 0 roidb entries: 100 -> 100 WARNING: Logging before InitGoogleLogging() is written to STDERR I0419 01:16:54.964942 25240 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead. I0419 01:16:55.073168 25240 solver.cpp:44] Initializing solver from parameters: train_net: "models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt" base_lr: 0.001 display: 20 lr_policy: "step" gamma: 0.1 momentum: 0.9 weight_decay: 0.0005 stepsize: 60000 snapshot: 0 snapshot_prefix: "zf_rpn" average_loss: 100 I0419 01:16:55.073168 25240 solver.cpp:77] Creating training net from train_net file: models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt I0419 01:16:55.074168 25240 net.cpp:51] Initializing net from parameters: name: "ZF" state { phase: TRAIN } layer { name: "input-data" type: "Python" top: "data" top: "im_info" top: "gt_boxes" python_param { module: "roi_data_layer.layer" layer: "RoIDataLayer" param_str: "\'num_classes\': 2" } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 96 pad: 3 kernel_size: 7 stride: 2 } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "norm1" type: "LRN" bottom: "conv1" 
top: "norm1" lrn_param { local_size: 3 alpha: 5e-05 beta: 0.75 norm_region: WITHIN_CHANNEL engine: CAFFE } } layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 pad: 1 } } layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 stride: 2 } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 3 alpha: 5e-05 beta: 0.75 norm_region: WITHIN_CHANNEL engine: CAFFE } } layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 pad: 1 } } layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "rpn_conv1" type: "Convolution" bottom: "conv5" top: "rpn_conv1" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_relu1" type: "ReLU" bottom: "rpn_conv1" top: "rpn_conv1" } layer { name: "rpn_cls_score" type: "Convolution" bottom: "rpn_conv1" top: "rpn_cls_score" param { lr_mult: 1 } 
param { lr_mult: 2 } convolution_param { num_output: 18 pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_bbox_pred" type: "Convolution" bottom: "rpn_conv1"RoiDataLayer: name_to_top: {'gt_boxes': 2, 'data': 0, 'im_info': 1} top: "rpn_bbox_pred" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 36 pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_cls_score_reshape" type: "Reshape" bottom: "rpn_cls_score" top: "rpn_cls_score_reshape" reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } } } layer { name: "rpn-data" type: "Python" bottom: "rpn_cls_score" bottom: "gt_boxes" bottom: "im_info" bottom: "data" top: "rpn_labels" top: "rpn_bbox_targets" top: "rpn_bbox_inside_weights" top: "rpn_bbox_outside_weights" python_param { module: "rpn.anchor_target_layer" layer: "AnchorTargetLayer" param_str: "\'feat_stride\': 16" } } layer { name: "rpn_loss_cls" type: "SoftmaxWithLoss" bottom: "rpn_cls_score_reshape" bottom: "rpn_labels" top: "rpn_cls_loss" loss_weight: 1 propagate_down: true propagate_down: false loss_param { ignore_label: -1 normalize: true } } layer { name: "rpn_loss_bbox" type: "SmoothL1Loss" bottom: "rpn_bbox_pred" bottom: "rpn_bbox_targets" bottom: "rpn_bbox_inside_weights" bottom: "rpn_bbox_outside_weights" top: "rpn_loss_bbox" loss_weight: 1 smooth_l1_loss_param { sigma: 3 } } layer { name: "dummy_roi_pool_conv5" type: "DummyData" top: "dummy_roi_pool_conv5" dummy_data_param { data_filler { type: "gaussian" std: 0.01 } shape { dim: 1 dim: 9216 } } } layer { name: "fc6" type: "InnerProduct" bottom: "dummy_roi_pool_conv5" top: "fc6" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } inner_product_param { num_output: 4096 } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "fc7" type: "InnerProduct" bottom: 
"fc6" top: "fc7" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } inner_product_param { num_output: 4096 } } layer { name: "silence_fc7" type: "Silence" bottom: "fc7" } I0419 01:16:55.074668 25240 layer_factory.cpp:58] Creating layer input-data I0419 01:16:55.109673 25240 net.cpp:84] Creating Layer input-data I0419 01:16:55.109673 25240 net.cpp:380] input-data -> data I0419 01:16:55.109673 25240 net.cpp:380] input-data -> im_info I0419 01:16:55.109673 25240 net.cpp:380] input-data -> gt_boxes I0419 01:16:55.111171 25240 net.cpp:122] Setting up input-data I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 (3) I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 4 (4) I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 7200028 I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:406] data_input-data_0_split <- data I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_0 I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_1 I0419 01:16:55.111668 25240 net.cpp:122] Setting up data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 21600028 I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer conv1 I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer conv1 I0419 01:16:55.111668 25240 net.cpp:406] conv1 <- data_input-data_0_split_0 I0419 01:16:55.111668 25240 net.cpp:380] conv1 -> conv1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up conv1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 
01:16:55.577394 25240 net.cpp:137] Memory required for data: 79200028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer relu1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer relu1 I0419 01:16:55.577394 25240 net.cpp:406] relu1 <- conv1 I0419 01:16:55.577394 25240 net.cpp:367] relu1 -> conv1 (in-place) I0419 01:16:55.577394 25240 net.cpp:122] Setting up relu1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 136800028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer norm1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer norm1 I0419 01:16:55.577394 25240 net.cpp:406] norm1 <- conv1 I0419 01:16:55.577394 25240 net.cpp:380] norm1 -> norm1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up norm1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 194400028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer pool1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer pool1 I0419 01:16:55.577394 25240 net.cpp:406] pool1 <- norm1 I0419 01:16:55.577394 25240 net.cpp:380] pool1 -> pool1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up pool1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 151 251 (3638496) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 208954012 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer conv2 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer conv2 I0419 01:16:55.577394 25240 net.cpp:406] conv2 <- pool1 I0419 01:16:55.577394 25240 net.cpp:380] conv2 -> conv2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 218759836 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu2 I0419 01:16:55.593016 25240 
net.cpp:84] Creating Layer relu2 I0419 01:16:55.593016 25240 net.cpp:406] relu2 <- conv2 I0419 01:16:55.593016 25240 net.cpp:367] relu2 -> conv2 (in-place) I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 228565660 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer norm2 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer norm2 I0419 01:16:55.593016 25240 net.cpp:406] norm2 <- conv2 I0419 01:16:55.593016 25240 net.cpp:380] norm2 -> norm2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up norm2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 238371484 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer pool2 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer pool2 I0419 01:16:55.593016 25240 net.cpp:406] pool2 <- norm2 I0419 01:16:55.593016 25240 net.cpp:380] pool2 -> pool2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up pool2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 240927388 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv3 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv3 I0419 01:16:55.593016 25240 net.cpp:406] conv3 <- pool2 I0419 01:16:55.593016 25240 net.cpp:380] conv3 -> conv3 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv3 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 244761244 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu3 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu3 I0419 01:16:55.593016 25240 net.cpp:406] relu3 <- conv3 I0419 01:16:55.593016 25240 net.cpp:367] relu3 -> conv3 (in-place) I0419 
01:16:55.593016 25240 net.cpp:122] Setting up relu3 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 248595100 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv4 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv4 I0419 01:16:55.593016 25240 net.cpp:406] conv4 <- conv3 I0419 01:16:55.593016 25240 net.cpp:380] conv4 -> conv4 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv4 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 252428956 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu4 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu4 I0419 01:16:55.593016 25240 net.cpp:406] relu4 <- conv4 I0419 01:16:55.593016 25240 net.cpp:367] relu4 -> conv4 (in-place) I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu4 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 256262812 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv5 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv5 I0419 01:16:55.593016 25240 net.cpp:406] conv5 <- conv4 I0419 01:16:55.593016 25240 net.cpp:380] conv5 -> conv5 I0419 01:16:55.608644 25240 net.cpp:122] Setting up conv5 I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.608644 25240 net.cpp:137] Memory required for data: 258818716 I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer relu5 I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer relu5 I0419 01:16:55.608644 25240 net.cpp:406] relu5 <- conv5 I0419 01:16:55.608644 25240 net.cpp:367] relu5 -> conv5 (in-place) I0419 01:16:55.608644 25240 net.cpp:122] Setting up relu5 I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.608644 25240 net.cpp:137] 
Memory required for data: 261374620 I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer rpn_conv1 I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer rpn_conv1 I0419 01:16:55.608644 25240 net.cpp:406] rpn_conv1 <- conv5 I0419 01:16:55.608644 25240 net.cpp:380] rpn_conv1 -> rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 263930524 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:406] rpn_relu1 <- rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:367] rpn_relu1 -> rpn_conv1 (in-place) I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 266486428 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:406] rpn_conv1_rpn_relu1_0_split <- rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271598236 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score <- 
rpn_conv1_rpn_relu1_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score -> rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271777948 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_rpn_cls_score_0_split <- rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272137372 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:406] rpn_bbox_pred <- rpn_conv1_rpn_relu1_0_split_1 I0419 01:16:55.624267 25240 net.cpp:380] rpn_bbox_pred -> rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272496796 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_reshape <- rpn_cls_score_rpn_cls_score_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_reshape 
-> rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 2 351 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272676508 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn-data I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn-data I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- rpn_cls_score_rpn_cls_score_0_split_1 I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- gt_boxes I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- im_info I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- data_input-data_0_split_1 I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_labels I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_targets I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_inside_weights I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_outside_weights I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn-data I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 1 351 64 (22464) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844636 I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_cls_score_reshape I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_labels I0419 01:16:55.639891 25240 net.cpp:380] rpn_loss_cls -> rpn_cls_loss I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:129] Top shape: (1) I0419 01:16:55.639891 25240 net.cpp:132] with 
loss weight 1 I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844640 I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_bbox I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_bbox I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_pred I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_targets I0419 01:16:55.639891 2*** Check failure stack trace: ***
How to get AP and mAP results after training Faster-RCNN-TensorFlow-Python3-master
I have looked through a lot of material; both tf-faster-rcnn and caffe faster-rcnn use test_net.py to evaluate the trained model. But I am using Faster-RCNN-TensorFlow-Python3-master, which has no test_net.py. How can I get AP and mAP results?
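One option is to port the evaluation yourself: py-faster-rcnn's `voc_eval` matches detections to ground truth, builds precision/recall arrays, and reduces them with the AP integral; mAP is then the mean of the per-class APs. A minimal sketch of that final step (the all-point interpolation branch of `voc_ap`):

```python
def voc_ap(rec, prec):
    """All-point interpolated VOC AP (the use_07_metric=False branch)."""
    # pad with sentinel values at recall 0 and 1
    mrec = [0.0] + list(rec) + [1.0]
    mpre = [0.0] + list(prec) + [0.0]
    # make precision monotonically decreasing, scanning right to left
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # area under the precision-recall step curve
    ap = 0.0
    for i in range(1, len(mrec)):
        ap += (mrec[i] - mrec[i - 1]) * mpre[i]
    return ap

print(voc_ap([0.5, 1.0], [1.0, 0.5]))  # 0.75
```

Feeding this the per-class recall/precision arrays from your own matching code reproduces the AP numbers the Caffe version's test_net.py would report.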
What does "fix layers" mean in the faster rcnn training code?
![image](https://img-ask.csdn.net/upload/201810/10/1539171355_528750.png) As shown, shouldn't the restored values be the parameters obtained from pre-training? Why is the number after the colon 0?? And what does the final "Fix" mean? Does it mean these layers' parameters no longer change during faster rcnn training?
faster_rcnn demo on Ubuntu shows no window
Running the faster_rcnn demo on Ubuntu brings up no window, but there is no error either. What could be going on? Ubuntu 16, CUDA 8.0.
faster-rcnn question about pre-training
Faster R-CNN training has two steps: 1. pre-train on the ImageNet dataset (1000 classes, about ten million images); 2. fine-tune on PASCAL VOC 2007 (20 classes, about ten thousand images) or another dataset. Question: if I want to train on a different dataset, say cell detection with about three classes, can I initialize the parameters directly from the step-1 pre-trained model? If not, roughly how many cell images would I need for pre-training?
faster-RCNN classification layer
Why does the RPN classification layer cls_score output two values per anchor, the foreground probability and the background probability? Isn't the background probability simply (1 - foreground probability)?
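The asker is right that the two probabilities are redundant: a 2-way softmax is exactly a sigmoid of the logit difference. The two-channel form is used so the RPN head stays an ordinary convolution feeding Caffe's SoftmaxWithLoss, which expects one score per class. A quick numerical check of the equivalence:

```python
import math

def softmax2(bg_logit, fg_logit):
    # numerically stable 2-way softmax
    m = max(bg_logit, fg_logit)
    eb, ef = math.exp(bg_logit - m), math.exp(fg_logit - m)
    return eb / (eb + ef), ef / (eb + ef)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

p_bg, p_fg = softmax2(0.3, 1.5)
# the two-logit softmax carries no extra information:
print(abs(p_fg - sigmoid(1.5 - 0.3)) < 1e-12)  # True
```

So the design choice is about reusing the standard multi-class loss machinery, not about extra information.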
How to visualize the training process of the tensorflow version of faster rcnn?
I'm a beginner. The TF version of faster rcnn I downloaded from GitHub runs on Win10, but the code has no statements that write a training log, so there is no log file. How do I add statements so training can be viewed in TensorBoard? The train code follows: import time import tensorflow as tf import numpy as np from tensorflow.python import pywrap_tensorflow import lib.config.config as cfg from lib.datasets import roidb as rdl_roidb from lib.datasets.factory import get_imdb from lib.datasets.imdb import imdb as imdb2 from lib.layer_utils.roi_data_layer import RoIDataLayer from lib.nets.vgg16 import vgg16 from lib.utils.timer import Timer try: import cPickle as pickle except ImportError: import pickle import os def get_training_roidb(imdb): """Returns a roidb (Region of Interest database) for use in training.""" if True: print('Appending horizontally-flipped training examples...') imdb.append_flipped_images() print('done') print('Preparing training data...') rdl_roidb.prepare_roidb(imdb) print('done') return imdb.roidb def combined_roidb(imdb_names): """ Combine multiple roidbs """ def get_roidb(imdb_name): imdb = get_imdb(imdb_name) print('Loaded dataset `{:s}` for training'.format(imdb.name)) imdb.set_proposal_method("gt") print('Set proposal method: {:s}'.format("gt")) roidb = get_training_roidb(imdb) return roidb roidbs = [get_roidb(s) for s in imdb_names.split('+')] roidb = roidbs[0] if len(roidbs) > 1: for r in roidbs[1:]: roidb.extend(r) tmp = get_imdb(imdb_names.split('+')[1]) imdb = imdb2(imdb_names, tmp.classes) else: imdb = get_imdb(imdb_names) return imdb, roidb class Train: def __init__(self): # Create network if cfg.FLAGS.net == 'vgg16': self.net = vgg16(batch_size=cfg.FLAGS.ims_per_batch) else: raise NotImplementedError self.imdb, self.roidb = combined_roidb("voc_2007_trainval") self.data_layer = RoIDataLayer(self.roidb, self.imdb.num_classes) self.output_dir = cfg.get_output_dir(self.imdb, 'default') def train(self): # Create session tfconfig = tf.ConfigProto(allow_soft_placement=True) tfconfig.gpu_options.allow_growth = True sess = tf.Session(config=tfconfig) with sess.graph.as_default(): 
```
        tf.set_random_seed(cfg.FLAGS.rng_seed)
        layers = self.net.create_architecture(sess, "TRAIN", self.imdb.num_classes, tag='default')
        loss = layers['total_loss']
        lr = tf.Variable(cfg.FLAGS.learning_rate, trainable=False)
        momentum = cfg.FLAGS.momentum
        optimizer = tf.train.MomentumOptimizer(lr, momentum)

        gvs = optimizer.compute_gradients(loss)

        # Double bias
        # Double the gradient of the bias if set
        if cfg.FLAGS.double_bias:
            final_gvs = []
            with tf.variable_scope('Gradient_Mult'):
                for grad, var in gvs:
                    scale = 1.
                    if cfg.FLAGS.double_bias and '/biases:' in var.name:
                        scale *= 2.
                    if not np.allclose(scale, 1.0):
                        grad = tf.multiply(grad, scale)
                    final_gvs.append((grad, var))
            train_op = optimizer.apply_gradients(final_gvs)
        else:
            train_op = optimizer.apply_gradients(gvs)

        # We will handle the snapshots ourselves
        self.saver = tf.train.Saver(max_to_keep=100000)
        # Write the train and validation information to tensorboard
        # writer = tf.summary.FileWriter(self.tbdir, sess.graph)
        # valwriter = tf.summary.FileWriter(self.tbvaldir)

        # Load weights
        # Fresh train directly from ImageNet weights
        print('Loading initial model weights from {:s}'.format(cfg.FLAGS.pretrained_model))
        variables = tf.global_variables()
        # Initialize all variables first
        sess.run(tf.variables_initializer(variables, name='init'))
        var_keep_dic = self.get_variables_in_checkpoint_file(cfg.FLAGS.pretrained_model)
        # Get the variables to restore, ignoring the variables to fix
        variables_to_restore = self.net.get_variables_to_restore(variables, var_keep_dic)
        restorer = tf.train.Saver(variables_to_restore)
        restorer.restore(sess, cfg.FLAGS.pretrained_model)
        print('Loaded.')
        # Need to fix the variables before loading, so that the RGB weights are changed to BGR
        # For VGG16 it also changes the convolutional weights fc6 and fc7 to
        # fully connected weights
        self.net.fix_variables(sess, cfg.FLAGS.pretrained_model)
        print('Fixed.')
        sess.run(tf.assign(lr, cfg.FLAGS.learning_rate))
        last_snapshot_iter = 0

        timer = Timer()
        iter = last_snapshot_iter + 1
        last_summary_time = time.time()
        while iter < cfg.FLAGS.max_iters + 1:
            # Learning rate
            if iter == cfg.FLAGS.step_size + 1:
                # Add snapshot here before reducing the learning rate
                # self.snapshot(sess, iter)
                sess.run(tf.assign(lr, cfg.FLAGS.learning_rate * cfg.FLAGS.gamma))

            timer.tic()
            # Get training data, one batch at a time
            blobs = self.data_layer.forward()
            # Compute the graph without summary
            rpn_loss_cls, rpn_loss_box, loss_cls, loss_box, total_loss = self.net.train_step(sess, blobs, train_op)
            timer.toc()
            iter += 1

            # Display training information
            if iter % cfg.FLAGS.display == 0:
                print('iter: %d / %d, total loss: %.6f\n >>> rpn_loss_cls: %.6f\n '
                      '>>> rpn_loss_box: %.6f\n >>> loss_cls: %.6f\n >>> loss_box: %.6f\n ' %
                      (iter, cfg.FLAGS.max_iters, total_loss, rpn_loss_cls, rpn_loss_box, loss_cls, loss_box))
                print('speed: {:.3f}s / iter'.format(timer.average_time))

            if iter % cfg.FLAGS.snapshot_iterations == 0:
                self.snapshot(sess, iter)

    def get_variables_in_checkpoint_file(self, file_name):
        try:
            reader = pywrap_tensorflow.NewCheckpointReader(file_name)
            var_to_shape_map = reader.get_variable_to_shape_map()
            return var_to_shape_map
        except Exception as e:  # pylint: disable=broad-except
            print(str(e))
            if "corrupted compressed block contents" in str(e):
                print("It's likely that your checkpoint file has been compressed with SNAPPY.")

    def snapshot(self, sess, iter):
        net = self.net
        if not os.path.exists(self.output_dir):
            os.makedirs(self.output_dir)

        # Store the model snapshot
        filename = 'vgg16_faster_rcnn_iter_{:d}'.format(iter) + '.ckpt'
        filename = os.path.join(self.output_dir, filename)
        self.saver.save(sess, filename)
        print('Wrote snapshot to: {:s}'.format(filename))

        # Also store some meta information: random state, etc.
        nfilename = 'vgg16_faster_rcnn_iter_{:d}'.format(iter) + '.pkl'
        nfilename = os.path.join(self.output_dir, nfilename)
        # current state of numpy random
        st0 = np.random.get_state()
        # current position in the database
        cur = self.data_layer._cur
        # current shuffled indices of the database
        perm = self.data_layer._perm
        # Dump the meta info
        with open(nfilename, 'wb') as fid:
            pickle.dump(st0, fid, pickle.HIGHEST_PROTOCOL)
            pickle.dump(cur, fid, pickle.HIGHEST_PROTOCOL)
            pickle.dump(perm, fid, pickle.HIGHEST_PROTOCOL)
            pickle.dump(iter, fid, pickle.HIGHEST_PROTOCOL)

        return filename, nfilename


if __name__ == '__main__':
    train = Train()
    train.train()
```
Training Faster R-CNN on my own dataset: the test output is a solid patch of red
It took me several days to train this network. At test time the input is a black-and-white image, and the output comes back as a solid patch of red, although the marked locations are roughly correct. What is going on here? Is it a problem with the images?
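One likely cause (assuming the network was trained on 3-channel BGR images with per-channel mean subtraction, as in the VGG16-based Faster R-CNN pipelines): a black-and-white test image arrives with a single channel, so both the preprocessing and the visualization misbehave. A minimal sketch of the usual workaround, replicating the gray channel three times before feeding the image in; the helper name `to_three_channels` is mine, not part of any codebase:

```python
import numpy as np

def to_three_channels(im):
    # Expand a single-channel (grayscale) image to three identical
    # channels so it matches the 3-channel input the backbone expects.
    if im.ndim == 2:                         # H x W      ->  H x W x 3
        im = np.stack([im, im, im], axis=-1)
    elif im.ndim == 3 and im.shape[2] == 1:  # H x W x 1  ->  H x W x 3
        im = np.repeat(im, 3, axis=2)
    return im

gray = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
rgb = to_three_channels(gray)
print(rgb.shape)   # (480, 640, 3)
```

If the boxes are roughly right but the whole image is tinted red, also check that the demo is not swapping the RGB/BGR channel order when drawing the result.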
Can a model fine-tuned with slim be used in tf-faster rcnn for fine-grained testing?
This is the error it produces on tf-faster rcnn:
```
Traceback (most recent call last):
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "../tools/demo.py", line 189, in <module>
    print(saver.restore(sess,tfmodel))
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1768, in restore
    six.reraise(exception_type, exception_value, exception_traceback)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1752, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "../tools/demo.py", line 187, in <module>
    saver = tf.train.Saver()
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1284, in __init__
    self.build()
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1296, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1333, in _build
    build_save=build_save, build_restore=build_restore)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 781, in _build_internal
    restore_sequentially, reshape)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 400, in _AddRestoreOps
    restore_sequentially)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
    op_def=op_def)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

NotFoundError (see above for traceback): Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
```
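The checkpoint produced by slim contains only the backbone weights, while the Faster R-CNN graph adds detection heads (such as `bbox_pred`) whose variables are not in the file, so a plain `tf.train.Saver()` restore fails on exactly those keys. The usual fix is to restore only the variables that exist in both places, with matching shapes, and let the heads keep their fresh initialization. A sketch of the matching step in plain Python (the TensorFlow side would then be `tf.train.Saver(var_list=...)`; the dictionaries below are made-up examples, not from a real checkpoint):

```python
def restorable_variables(graph_vars, ckpt_var_shapes):
    # Return names safe to restore: present in both the graph and the
    # checkpoint with identical shapes. Task-specific heads such as
    # 'bbox_pred', trained for a different class count or absent from a
    # slim checkpoint, are skipped; those are exactly the keys the
    # NotFoundError above complains about.
    keep = []
    for name, shape in graph_vars.items():
        if name in ckpt_var_shapes and ckpt_var_shapes[name] == shape:
            keep.append(name)
    return sorted(keep)

graph_vars = {
    'resnet_v1_101/conv1/weights': [7, 7, 3, 64],
    'resnet_v1_101/bbox_pred/weights': [2048, 804],
    'resnet_v1_101/bbox_pred/biases': [804],
}
ckpt = {
    'resnet_v1_101/conv1/weights': [7, 7, 3, 64],   # backbone only
}
print(restorable_variables(graph_vars, ckpt))
# ['resnet_v1_101/conv1/weights']
```

The checkpoint side of the comparison can be read with `tf.train.NewCheckpointReader(path).get_variable_to_shape_map()`, as the training script above already does.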
Problem training Faster R-CNN on my own dataset with Python on Windows 10
Running Faster R-CNN with Python on Windows 10 to train my own dataset produces the following:
```
I0417 16:38:45.682274 7396 layer_factory.hpp:77] Creating layer rpn_cls_score_rpn_cls_score
*** Check failure stack trace: ***
```
What are the possible causes of this failure, and how should it be fixed?
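Hard to say without the full log, but the doubled name in `Creating layer rpn_cls_score_rpn_cls_score` is suspicious: a glog `CHECK` failure right at layer-creation time often comes from a hand-edited train prototxt with a duplicated layer name. A quick sanity check you can run over the prototxt; `duplicate_layer_names` is a hypothetical helper, not part of py-faster-rcnn:

```python
import re
from collections import Counter

def duplicate_layer_names(prototxt_text):
    # Collect every layer 'name:' field from a prototxt and report the
    # names that occur more than once; Caffe aborts on such conflicts.
    names = re.findall(r'name:\s*"([^"]+)"', prototxt_text)
    return [n for n, c in Counter(names).items() if c > 1]

sample = '''
layer { name: "rpn_cls_score" type: "Convolution" }
layer { name: "rpn_cls_score" type: "Convolution" }
layer { name: "rpn_bbox_pred" type: "Convolution" }
'''
print(duplicate_layer_names(sample))   # ['rpn_cls_score']
```

Also check the `top:` blob names in the same way; a layer accidentally given its own name as a duplicated top produces similarly fused names in the log.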
How do you write a solution to "Network"?
Problem Description
The ALPC company is now working on its own network system, which connects all N ALPC departments. To economize on spending, the backbone network has only one router per department, and N-1 optical fibers in total to connect all routers. The usual way to measure connecting speed is lag, or network latency, referring to the time taken for a sent packet of data to be received at the other end. Now the network is on trial, and new photonic crystal fibers designed by ALPC42 are being tried out; the lag on the fibers can be ignored. That means lag happens only when a message passes through a router. ALPC42 is trying to change routers to make the network faster. Now he wants to know, at any exact moment and between any pair of nodes, what the K-th highest router latency is. He needs your help.

Input
There is only one test case in the input file. Your program reads the information of N routers and N-1 fiber connections, then Q questions of two kinds:
1. For some reason, the latency of one router changes.
2. Query the K-th largest router latency between two routers.

The first line contains two integers N and Q. 0<=N<=80000, 0<=Q<=30000. The second line contains N integers, the latency of each router at the very beginning. Then N-1 lines follow, each containing two integers x and y, telling that a fiber connects router x and router y. Then Q lines follow describing the questions, three numbers k, a, b per line. If k=0, the latency of router a, Ta, changes to b; if k>0, it asks for the latency of the k-th largest-lag router between a and b (including routers a and b). 0<=b<100000000.

A blank line follows after each case.

Output
For each question with k>0, print a line with the answering latency. If there are fewer than k routers on the way, print "invalid request!" instead.

Sample Input
5 5
5 1 2 3 4
3 1
2 1
4 3
5 3
2 4 5
0 1 2
2 2 3
2 1 4
3 3 5

Sample Output
3
2
2
invalid request!
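For intuition (not an accepted solution at these limits), the query itself can be sketched with a brute-force walk: find the unique a-b path on the tree, sort the router latencies on it, and take the k-th largest. A real solution for N<=80000, Q<=30000 would need something like heavy-path decomposition over balanced search trees, but the answer logic is the same:

```python
from collections import defaultdict

def solve(n, edges, latency, queries):
    # Brute-force reference: DFS for the unique a-b path in the tree,
    # then sort the latencies on that path and pick the k-th largest.
    adj = defaultdict(list)
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)

    def path(a, b):
        stack, seen = [(a, [a])], {a}
        while stack:
            node, p = stack.pop()
            if node == b:
                return p
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, p + [nxt]))

    out = []
    for k, a, b in queries:
        if k == 0:
            latency[a] = b          # update router a's latency
        else:
            lags = sorted((latency[v] for v in path(a, b)), reverse=True)
            out.append(str(lags[k - 1]) if k <= len(lags) else "invalid request!")
    return out

lat = {i + 1: v for i, v in enumerate([5, 1, 2, 3, 4])}
print(solve(5, [(3, 1), (2, 1), (4, 3), (5, 3)], lat,
            [(2, 4, 5), (0, 1, 2), (2, 2, 3), (2, 1, 4), (3, 3, 5)]))
# ['3', '2', '2', 'invalid request!']
```

Running it on the sample input reproduces the sample output, including the "invalid request!" case where the 3-5 path holds only two routers.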
Training Faster R-CNN on my own dataset: the following error appears while setting up the dataset interface
![图片说明](https://img-ask.csdn.net/upload/201704/16/1492329858_521128.png)

The dataset is in PASCAL VOC format. Who can tell me what is going on? This problem has really been tormenting me.
Moving Points: a chase problem with moving points
Problem Description
Consider a number of Target points in a plane. Each Target point moves in a straight line at a constant speed and does not change direction. Now consider a Chaser point that starts at the origin and moves at a speed faster than any of the Target points. The Chaser point moves at a constant speed, but it is capable of changing direction at will. It will 'catch' a Target point, then move from there to catch another Target point, and so on. Given the parameters of the Chaser point and the Target points, what is the least amount of time it takes the Chaser point to catch all of the Target points? 'Catch' simply means that the Catcher and the Target occupy the same point in the plane at the same time. This can be instantaneous; there is no need for the Catcher to stay with the Target for any non-zero length of time.

Input
There will be several test cases in the input. Each test case will begin with two integers
N C
where N (1 ≤ N ≤ 15) is the number of Target points, and C (0 < C ≤ 1,000) is the speed of the Chaser point. Each of the next N lines will have four integers, describing a Target point:
X Y D S
where (X,Y) is the location in the plane (-1000 ≤ X,Y ≤ 1,000) of that Target point at time 0, D (0 ≤ D < 360) is the direction of movement in degrees (0 degrees is the positive X axis, 90 degrees is the positive Y axis), and S (0 ≤ S < C) is the speed of that Target point. It is assumed that all Target points start moving immediately at time 0. The input will end with a line with two 0s.

Output
For each test case, output a single real number on its own line, representing the least amount of time needed for the Chaser point to catch all of the Target points. Print this number to exactly 2 decimal places, rounded. Output no extra spaces, and do not separate answers with blank lines.

Sample Input
2 25
19 19 32 10
6 45 133 19
5 10
10 20 45 3
30 10 135 4
100 100 219 5
10 100 301 4
30 30 5 3
0 0

Sample Output
12.62
12.54
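The core subproblem is catching a single target: if the chaser stands at point p at time t0, the earliest meeting time is the positive root of a quadratic, and with N ≤ 15 the full answer is a bitmask DP (or a search over permutations) built on top of that. A sketch of the single-target step, taking velocity components directly rather than the (D, S) degrees/speed pair from the input:

```python
import math

def catch_time(chaser, t0, target, c):
    # Earliest time t >= t0 at which a chaser of speed c starting at
    # `chaser` can meet a target that was at (x0, y0) at time 0 and
    # drifts with constant velocity (vx, vy), |v| < c.
    # Setting |target(t0+s) - chaser| = c*s gives, with
    # w = target(t0) - chaser:
    #     (c^2 - |v|^2) s^2 - 2 (w.v) s - |w|^2 = 0
    # whose positive root is the catch delay s.
    px, py = chaser
    x0, y0, vx, vy = target
    wx, wy = x0 + vx * t0 - px, y0 + vy * t0 - py
    A = c * c - (vx * vx + vy * vy)      # > 0 since the chaser is faster
    B = wx * vx + wy * vy
    s = (B + math.sqrt(B * B + A * (wx * wx + wy * wy))) / A
    return t0 + s

# Stationary target at (3, 4), chaser speed 5 from the origin: caught at t = 1.
print(catch_time((0.0, 0.0), 0.0, (3.0, 4.0, 0.0, 0.0), 5.0))
```

For the full problem, convert each input line via (vx, vy) = (S*cos(D), S*sin(D)) with D in radians, then minimize over orders of catching: state (mask of caught targets, last target caught, current time), chaining `catch_time` from each caught target's position.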
Can the bounding boxes in Faster R-CNN be improved?
Conventional bounding boxes are axis-aligned rectangles, encoded as (x y w h). How can they be turned into oriented bounding boxes, (x1 y1 x2 y2 x3 y3 x4 y4)? Or is there an existing object-detection algorithm that produces oriented bounding boxes like this? Any pointers appreciated; an example is shown below.
![图片说明](https://img-ask.csdn.net/upload/201803/10/1520641104_790494.jpg)
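Rotated-box detectors (RRPN-style variants of Faster R-CNN, common in scene-text detection) usually regress a (cx, cy, w, h, angle) tuple and only convert to the four corner points for output. The conversion is a plain 2-D rotation about the box center; a small sketch with a helper name of my own choosing:

```python
import math

def obb_corners(cx, cy, w, h, angle_deg):
    # Corners (x1 y1 ... x4 y4) of an oriented box: rotate the four
    # axis-aligned corner offsets by the box angle around the center.
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    pts = []
    for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)):
        pts.append((cx + dx*ca - dy*sa, cy + dx*sa + dy*ca))
    return pts

print(obb_corners(10, 10, 4, 2, 0))
# [(8.0, 9.0), (12.0, 9.0), (12.0, 11.0), (8.0, 11.0)]
```

Going the other way, datasets that ship the eight corner numbers directly can be reduced back to (cx, cy, w, h, angle) with OpenCV's `cv2.minAreaRect`.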