What does "fix layers" mean in the Faster R-CNN training code?

[image: screenshot of the layer-restore log]

As shown in the screenshot, shouldn't the restored layers carry the parameters obtained from pretraining? Why is there a 0 after the colon? Also, what does the final "Fix" mean? Does it mean the parameters of these layers stay unchanged during Faster R-CNN training?
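A hedged note, since the screenshot itself is not visible here: in TensorFlow (which tf-faster-rcnn uses), a restored variable name such as `vgg_16/conv1/weights:0` ends in `:0` because that suffix is the tensor's output index on the op that produces it, not a parameter value. And yes, the "Fix" lines mean those layers are frozen: they keep their restored pretrained weights and receive no gradient updates. Caffe expresses the same idea with `lr_mult: 0`. A minimal numpy sketch of that mechanism (toy layer names and values, not the real training loop):

```python
import numpy as np

# Toy SGD step with per-layer lr_mult, mimicking how Caffe "fixes" layers:
# a frozen layer simply has lr_mult = 0, so its weights never change.
params = {"conv1": np.ones(3), "rpn_conv1": np.ones(3)}
grads  = {"conv1": np.full(3, 0.5), "rpn_conv1": np.full(3, 0.5)}
lr_mult = {"conv1": 0.0,        # "fixed": keeps its pretrained values
           "rpn_conv1": 1.0}    # trained normally
base_lr = 0.1

for name in params:
    params[name] -= base_lr * lr_mult[name] * grads[name]

print(params["conv1"])      # unchanged
print(params["rpn_conv1"])  # updated
```

With `lr_mult` (or `trainable=False` in TensorFlow) set to zero, the effective step size is zero, so the fixed layer carries its pretrained parameters unchanged through all of training.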

Related questions
faster-rcnn reports "Error using fix"
Could anyone please take a look? The error is as follows:

```
Error using fix
Too many input arguments

Error in proposal_train (line 86)
fix validation data

Error in Faster_RCNN_Train.do_proposal_train (line 7)
model_stage.output_model_file = proposal_train(conf, dataset.imdb_train, dataset.roidb_train, …

Error in script_faster_rcnn_VOC2007_ZF (line 45)
model.stage1_rpn = Faster_RCNN_Train.do_proposal_train(conf_proposal, dataset, model.stage1_rpn, opts.do_val);
```
Error when running the faster rcnn demo
After configuring faster rcnn, running ./tools/demo.py gives the following error:

```
Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Python
```
faster rcnn's train.prototxt has no data layer
I'm training faster rcnn on my own data and need to change the num_classes parameter of the data layer in train.prototxt, but train.prototxt has no data layer. How should I change it?
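For reference (hedged, based on py-faster-rcnn's design): the input layer in train.prototxt is the Python layer named `input-data` of type `Python`, and `num_classes` lives in its `python_param { param_str: ... }` rather than in a conventional `data` layer. A small sketch of how the string is consumed; the real `RoIDataLayer` parses it with `yaml.load`, while `eval` is used here only to keep the example dependency-free:

```python
# In train.prototxt the inputs come from a Python layer:
#   layer { name: "input-data" type: "Python"
#           python_param { module: "roi_data_layer.layer"
#                          layer: "RoIDataLayer"
#                          param_str: "'num_classes': 21" } }
# so changing the class count means editing param_str, not a "data" layer.
param_str = "'num_classes': 21"             # 20 object classes + 1 background
layer_params = eval("{" + param_str + "}")  # the real layer uses yaml.load(param_str)
print(layer_params["num_classes"])
```

So for a custom dataset you would set `'num_classes'` in that `param_str` to (number of your classes + 1 for background).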
'NoneType' object is not subscriptable when training faster rcnn on my own dataset on Windows (tensorflow)
While following online tutorials to train faster rcnn on my own dataset, I get a 'NoneType' object is not subscriptable error. I checked explanations online but still don't understand the cause. How can I fix this? Thanks! ![screenshot](https://img-ask.csdn.net/upload/201910/20/1571582480_395221.png)
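One frequent cause (an assumption here, since the full traceback isn't shown): `cv2.imread` returns `None` rather than raising when an image path or extension is wrong, and the `None` only blows up later when something indexes it. A sketch of a defensive read; `imread_fn` stands in for `cv2.imread` so the example runs without OpenCV installed:

```python
# cv2.imread returns None (not an exception) for a bad path, so the failure
# only surfaces later as "'NoneType' object is not subscriptable".
# A defensive wrapper makes the real cause visible at the point of reading.
def safe_imread(path, imread_fn):
    im = imread_fn(path)
    if im is None:
        raise FileNotFoundError(f"imread failed for: {path} "
                                "(check the path and file extension)")
    return im

fake_imread = lambda path: None   # simulates cv2.imread on a missing file
try:
    safe_imread("missing.jpg", fake_imread)
except FileNotFoundError as e:
    print("caught:", e)
```

Checking that every path listed in the ImageSets txt files actually exists (watch for `.jpg` vs `.JPG` vs `.png`) usually resolves this class of error.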
How to get AP and mAP results after training Faster-RCNN-TensorFlow-Python3-master
I've read a lot of material: tf-faster-rcnn and caffe faster-rcnn both use test_net.py to evaluate training results. But I'm using Faster-RCNN-TensorFlow-Python3-master, which has no test_net.py. How can I get the AP and mAP results?
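One option is to port the evaluation code from tf-faster-rcnn (its `lib/datasets/voc_eval.py`), since the AP computation itself is small. The sketch below reproduces the all-point interpolated AP used there (`use_07_metric=False`), given precomputed precision/recall arrays; mAP is then just the mean of the per-class APs:

```python
import numpy as np

def voc_ap(rec, prec):
    # All-point interpolated average precision, as in voc_eval:
    # make precision monotonically decreasing, then sum area under the curve.
    mrec = np.concatenate(([0.], rec, [1.]))
    mpre = np.concatenate(([0.], prec, [0.]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = max(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])

# Toy example: 2 detections, recall 0.5 -> 1.0 with precision 1.0 -> 0.5
print(voc_ap(np.array([0.5, 1.0]), np.array([1.0, 0.5])))  # -> 0.75
```

The remaining work is producing the per-class recall/precision arrays by running the trained model over the test split and matching detections to ground truth at IoU 0.5, which is what test_net.py does before calling this function.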
Windows faster rcnn error: -std=c99

```
F:\Caffe-cp\faster_rcnn\py-faster-rcnn\lib>python setup.py install
home = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5
running install
running bdist_egg
running egg_info
writing fast_rcnn.egg-info\PKG-INFO
writing top-level names to fast_rcnn.egg-info\top_level.txt
writing dependency_links to fast_rcnn.egg-info\dependency_links.txt
reading manifest file 'fast_rcnn.egg-info\SOURCES.txt'
writing manifest file 'fast_rcnn.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
skipping 'utils\bbox.c' Cython extension (up-to-date)
skipping 'nms\cpu_nms.c' Cython extension (up-to-date)
skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
building 'pycocotools._mask' extension
C:\Users\Joker\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -IE:\anaconda2\anaconda\lib\site-packages\numpy\core\include -Ipycocotools -IE:\anaconda2\anaconda\include -IE:\anaconda2\anaconda\PC /Tcpycocotools\maskApi.c /Fobuild\temp.win-amd64-2.7\Release\pycocotools\maskApi.obj -std=c99
cl : Command line warning D9002 : ignoring unknown option '-std=c99'
maskApi.c
f:\caffe-cp\faster_rcnn\py-faster-rcnn\lib\pycocotools\maskApi.h(8) : fatal error C1083: Cannot open include file: 'stdbool.h': No such file or directory
error: command 'C:\\Users\\Joker\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\amd64\\cl.exe' failed with exit status 2
```
faster_rcnn demo on Ubuntu shows no window
Running the faster_rcnn demo on Ubuntu, no window appears, but there is no error either. Why does no window show up? Ubuntu 16, CUDA 8.0.
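A common cause (assumed here, not confirmed by the question) is a headless or misconfigured display: over SSH or without a GUI backend, `plt.show()` can return silently without opening any window. One workaround used with py-faster-rcnn's demo is to switch matplotlib to the non-interactive Agg backend and save the detection figure to a file instead:

```python
# On a headless/remote session plt.show() may silently display nothing.
# Switching to the Agg backend and saving to disk sidesteps the GUI entirely.
import matplotlib
matplotlib.use("Agg")           # must run before pyplot is imported
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])         # stand-in for the demo's detection drawing
fig.savefig("demo_result.png")  # inspect this file instead of a window
print(matplotlib.get_backend())
```

If a window is genuinely wanted, running with X forwarding (`ssh -X`) or installing a GUI backend such as python-tk are the usual alternatives.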
Which layer is likely the problem when training faster rcnn?
+ echo Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47 Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47 + ./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name ZF --weights data/imagenet_models/CaffeNet.v2.caffemodel --imdb voc_2007_trainval --cfg experiments/cfgs/faster_rcnn_alt_opt.yml Called with args: Namespace(cfg_file='experiments/cfgs/faster_rcnn_alt_opt.yml', gpu_id=0, imdb_name='voc_2007_trainval', net_name='ZF', pretrained_model='data/imagenet_models/CaffeNet.v2.caffemodel', set_cfgs=None) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Stage 1 RPN, init from ImageNet model ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Init model: data/imagenet_models/CaffeNet.v2.caffemodel Using config: {'DATA_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master\\data', 'DEDUP_BOXES': 0.0625, 'EPS': 1e-14, 'EXP_DIR': 'default', 'GPU_ID': 0, 'MATLAB': 'matlab', 'MODELS_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master\\models\\pascal_voc', 'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]), 'RNG_SEED': 3, 'ROOT_DIR': 'E:\\caffe-frcnn\\py-faster-rcnn-master', 'TEST': {'BBOX_REG': True, 'HAS_RPN': False, 'MAX_SIZE': 1000, 'NMS': 0.3, 'PROPOSAL_METHOD': 'selective_search', 'RPN_MIN_SIZE': 16, 'RPN_NMS_THRESH': 0.7, 'RPN_POST_NMS_TOP_N': 300, 'RPN_PRE_NMS_TOP_N': 6000, 'SCALES': [600], 'SVM': False}, 'TRAIN': {'ASPECT_GROUPING': True, 'BATCH_SIZE': 128, 'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0], 'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2], 'BBOX_NORMALIZE_TARGETS': True, 'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': False, 'BBOX_REG': False, 'BBOX_THRESH': 0.5, 'BG_THRESH_HI': 0.5, 'BG_THRESH_LO': 0.1, 'FG_FRACTION': 0.25, 'FG_THRESH': 0.5, 'HAS_RPN': True, 'IMS_PER_BATCH': 1, 'MAX_SIZE': 1000, 'PROPOSAL_METHOD': 'gt', 'RPN_BATCHSIZE': 256, 'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0], 'RPN_CLOBBER_POSITIVES': False, 'RPN_FG_FRACTION': 0.5, 
'RPN_MIN_SIZE': 16, 'RPN_NEGATIVE_OVERLAP': 0.3, 'RPN_NMS_THRESH': 0.7, 'RPN_POSITIVE_OVERLAP': 0.7, 'RPN_POSITIVE_WEIGHT': -1.0, 'RPN_POST_NMS_TOP_N': 2000, 'RPN_PRE_NMS_TOP_N': 12000, 'SCALES': [600], 'SNAPSHOT_INFIX': '', 'SNAPSHOT_ITERS': 10000, 'USE_FLIPPED': True, 'USE_PREFETCH': False}, 'USE_GPU_NMS': True} Loaded dataset `voc_2007_trainval` for training Set proposal method: gt Appending horizontally-flipped training examples... voc_2007_trainval gt roidb loaded from E:\caffe-frcnn\py-faster-rcnn-master\data\cache\voc_2007_trainval_gt_roidb.pkl done Preparing training data... done roidb len: 100 Output will be saved to `E:\caffe-frcnn\py-faster-rcnn-master\output\default\voc_2007_trainval` Filtered 0 roidb entries: 100 -> 100 WARNING: Logging before InitGoogleLogging() is written to STDERR I0419 01:16:54.964942 25240 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead. I0419 01:16:55.073168 25240 solver.cpp:44] Initializing solver from parameters: train_net: "models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt" base_lr: 0.001 display: 20 lr_policy: "step" gamma: 0.1 momentum: 0.9 weight_decay: 0.0005 stepsize: 60000 snapshot: 0 snapshot_prefix: "zf_rpn" average_loss: 100 I0419 01:16:55.073168 25240 solver.cpp:77] Creating training net from train_net file: models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt I0419 01:16:55.074168 25240 net.cpp:51] Initializing net from parameters: name: "ZF" state { phase: TRAIN } layer { name: "input-data" type: "Python" top: "data" top: "im_info" top: "gt_boxes" python_param { module: "roi_data_layer.layer" layer: "RoIDataLayer" param_str: "\'num_classes\': 2" } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 96 pad: 3 kernel_size: 7 stride: 2 } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "norm1" type: "LRN" bottom: "conv1" 
top: "norm1" lrn_param { local_size: 3 alpha: 5e-05 beta: 0.75 norm_region: WITHIN_CHANNEL engine: CAFFE } } layer { name: "pool1" type: "Pooling" bottom: "norm1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 pad: 1 } } layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 stride: 2 } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "norm2" type: "LRN" bottom: "conv2" top: "norm2" lrn_param { local_size: 3 alpha: 5e-05 beta: 0.75 norm_region: WITHIN_CHANNEL engine: CAFFE } } layer { name: "pool2" type: "Pooling" bottom: "norm2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 pad: 1 } } layer { name: "conv3" type: "Convolution" bottom: "pool2" top: "conv3" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 stride: 1 } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "rpn_conv1" type: "Convolution" bottom: "conv5" top: "rpn_conv1" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_relu1" type: "ReLU" bottom: "rpn_conv1" top: "rpn_conv1" } layer { name: "rpn_cls_score" type: "Convolution" bottom: "rpn_conv1" top: "rpn_cls_score" param { lr_mult: 1 } 
param { lr_mult: 2 } convolution_param { num_output: 18 pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_bbox_pred" type: "Convolution" bottom: "rpn_conv1"RoiDataLayer: name_to_top: {'gt_boxes': 2, 'data': 0, 'im_info': 1} top: "rpn_bbox_pred" param { lr_mult: 1 } param { lr_mult: 2 } convolution_param { num_output: 36 pad: 0 kernel_size: 1 stride: 1 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "rpn_cls_score_reshape" type: "Reshape" bottom: "rpn_cls_score" top: "rpn_cls_score_reshape" reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } } } layer { name: "rpn-data" type: "Python" bottom: "rpn_cls_score" bottom: "gt_boxes" bottom: "im_info" bottom: "data" top: "rpn_labels" top: "rpn_bbox_targets" top: "rpn_bbox_inside_weights" top: "rpn_bbox_outside_weights" python_param { module: "rpn.anchor_target_layer" layer: "AnchorTargetLayer" param_str: "\'feat_stride\': 16" } } layer { name: "rpn_loss_cls" type: "SoftmaxWithLoss" bottom: "rpn_cls_score_reshape" bottom: "rpn_labels" top: "rpn_cls_loss" loss_weight: 1 propagate_down: true propagate_down: false loss_param { ignore_label: -1 normalize: true } } layer { name: "rpn_loss_bbox" type: "SmoothL1Loss" bottom: "rpn_bbox_pred" bottom: "rpn_bbox_targets" bottom: "rpn_bbox_inside_weights" bottom: "rpn_bbox_outside_weights" top: "rpn_loss_bbox" loss_weight: 1 smooth_l1_loss_param { sigma: 3 } } layer { name: "dummy_roi_pool_conv5" type: "DummyData" top: "dummy_roi_pool_conv5" dummy_data_param { data_filler { type: "gaussian" std: 0.01 } shape { dim: 1 dim: 9216 } } } layer { name: "fc6" type: "InnerProduct" bottom: "dummy_roi_pool_conv5" top: "fc6" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } inner_product_param { num_output: 4096 } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "fc7" type: "InnerProduct" bottom: 
"fc6" top: "fc7" param { lr_mult: 0 decay_mult: 0 } param { lr_mult: 0 decay_mult: 0 } inner_product_param { num_output: 4096 } } layer { name: "silence_fc7" type: "Silence" bottom: "fc7" } I0419 01:16:55.074668 25240 layer_factory.cpp:58] Creating layer input-data I0419 01:16:55.109673 25240 net.cpp:84] Creating Layer input-data I0419 01:16:55.109673 25240 net.cpp:380] input-data -> data I0419 01:16:55.109673 25240 net.cpp:380] input-data -> im_info I0419 01:16:55.109673 25240 net.cpp:380] input-data -> gt_boxes I0419 01:16:55.111171 25240 net.cpp:122] Setting up input-data I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 (3) I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 4 (4) I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 7200028 I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:406] data_input-data_0_split <- data I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_0 I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_1 I0419 01:16:55.111668 25240 net.cpp:122] Setting up data_input-data_0_split I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000) I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 21600028 I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer conv1 I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer conv1 I0419 01:16:55.111668 25240 net.cpp:406] conv1 <- data_input-data_0_split_0 I0419 01:16:55.111668 25240 net.cpp:380] conv1 -> conv1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up conv1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 
01:16:55.577394 25240 net.cpp:137] Memory required for data: 79200028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer relu1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer relu1 I0419 01:16:55.577394 25240 net.cpp:406] relu1 <- conv1 I0419 01:16:55.577394 25240 net.cpp:367] relu1 -> conv1 (in-place) I0419 01:16:55.577394 25240 net.cpp:122] Setting up relu1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 136800028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer norm1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer norm1 I0419 01:16:55.577394 25240 net.cpp:406] norm1 <- conv1 I0419 01:16:55.577394 25240 net.cpp:380] norm1 -> norm1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up norm1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 194400028 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer pool1 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer pool1 I0419 01:16:55.577394 25240 net.cpp:406] pool1 <- norm1 I0419 01:16:55.577394 25240 net.cpp:380] pool1 -> pool1 I0419 01:16:55.577394 25240 net.cpp:122] Setting up pool1 I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 151 251 (3638496) I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 208954012 I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer conv2 I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer conv2 I0419 01:16:55.577394 25240 net.cpp:406] conv2 <- pool1 I0419 01:16:55.577394 25240 net.cpp:380] conv2 -> conv2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 218759836 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu2 I0419 01:16:55.593016 25240 
net.cpp:84] Creating Layer relu2 I0419 01:16:55.593016 25240 net.cpp:406] relu2 <- conv2 I0419 01:16:55.593016 25240 net.cpp:367] relu2 -> conv2 (in-place) I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 228565660 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer norm2 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer norm2 I0419 01:16:55.593016 25240 net.cpp:406] norm2 <- conv2 I0419 01:16:55.593016 25240 net.cpp:380] norm2 -> norm2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up norm2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 238371484 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer pool2 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer pool2 I0419 01:16:55.593016 25240 net.cpp:406] pool2 <- norm2 I0419 01:16:55.593016 25240 net.cpp:380] pool2 -> pool2 I0419 01:16:55.593016 25240 net.cpp:122] Setting up pool2 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 240927388 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv3 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv3 I0419 01:16:55.593016 25240 net.cpp:406] conv3 <- pool2 I0419 01:16:55.593016 25240 net.cpp:380] conv3 -> conv3 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv3 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 244761244 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu3 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu3 I0419 01:16:55.593016 25240 net.cpp:406] relu3 <- conv3 I0419 01:16:55.593016 25240 net.cpp:367] relu3 -> conv3 (in-place) I0419 
01:16:55.593016 25240 net.cpp:122] Setting up relu3 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 248595100 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv4 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv4 I0419 01:16:55.593016 25240 net.cpp:406] conv4 <- conv3 I0419 01:16:55.593016 25240 net.cpp:380] conv4 -> conv4 I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv4 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 252428956 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu4 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu4 I0419 01:16:55.593016 25240 net.cpp:406] relu4 <- conv4 I0419 01:16:55.593016 25240 net.cpp:367] relu4 -> conv4 (in-place) I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu4 I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464) I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 256262812 I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv5 I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv5 I0419 01:16:55.593016 25240 net.cpp:406] conv5 <- conv4 I0419 01:16:55.593016 25240 net.cpp:380] conv5 -> conv5 I0419 01:16:55.608644 25240 net.cpp:122] Setting up conv5 I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.608644 25240 net.cpp:137] Memory required for data: 258818716 I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer relu5 I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer relu5 I0419 01:16:55.608644 25240 net.cpp:406] relu5 <- conv5 I0419 01:16:55.608644 25240 net.cpp:367] relu5 -> conv5 (in-place) I0419 01:16:55.608644 25240 net.cpp:122] Setting up relu5 I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.608644 25240 net.cpp:137] 
Memory required for data: 261374620 I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer rpn_conv1 I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer rpn_conv1 I0419 01:16:55.608644 25240 net.cpp:406] rpn_conv1 <- conv5 I0419 01:16:55.608644 25240 net.cpp:380] rpn_conv1 -> rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 263930524 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:406] rpn_relu1 <- rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:367] rpn_relu1 -> rpn_conv1 (in-place) I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_relu1 I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 266486428 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:406] rpn_conv1_rpn_relu1_0_split <- rpn_conv1 I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1_rpn_relu1_0_split I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271598236 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score <- 
rpn_conv1_rpn_relu1_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score -> rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271777948 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_rpn_cls_score_0_split <- rpn_cls_score I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_1 I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_rpn_cls_score_0_split I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272137372 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:406] rpn_bbox_pred <- rpn_conv1_rpn_relu1_0_split_1 I0419 01:16:55.624267 25240 net.cpp:380] rpn_bbox_pred -> rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_bbox_pred I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272496796 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_reshape <- rpn_cls_score_rpn_cls_score_0_split_0 I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_reshape 
-> rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_reshape I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 2 351 64 (44928) I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272676508 I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn-data I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn-data I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- rpn_cls_score_rpn_cls_score_0_split_1 I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- gt_boxes I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- im_info I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- data_input-data_0_split_1 I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_labels I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_targets I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_inside_weights I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_outside_weights I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn-data I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 1 351 64 (22464) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856) I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844636 I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_cls_score_reshape I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_labels I0419 01:16:55.639891 25240 net.cpp:380] rpn_loss_cls -> rpn_cls_loss I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn_loss_cls I0419 01:16:55.639891 25240 net.cpp:129] Top shape: (1) I0419 01:16:55.639891 25240 net.cpp:132] with 
loss weight 1 I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844640 I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_bbox I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_bbox I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_pred I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_targets I0419 01:16:55.639891 2*** Check failure stack trace: ***
Problem training faster-rcnn on my own dataset with python on win10
Training faster-rcnn on my own dataset with python on win10 fails with the following:

```
I0417 16:38:45.682274 7396 layer_factory.hpp:77] Creating layer rpn_cls_score_rpn_cls_score*** Check failure stack trace: ***
```

What are the possible causes of this problem, and how can it be solved?
faster-rcnn: a question about pre-training
Faster R-CNN training has two steps: 1. pre-train on the ImageNet dataset (1000 classes, ten million images); 2. fine-tune on PASCAL VOC 2007 (20 classes, ten thousand images) or another dataset. Question: if I want to train on a different dataset, for example cell detection with about three classes, can I initialize the parameters directly from the step-1 pre-trained model? If not, roughly how many cell images would I need for pre-training?
Testing faster rcnn trained on my own dataset gives an all-red result
After days of work I finally trained the network. At test time the input is black-and-white images, and the result comes out all red, though the annotated locations are roughly right. What is going on? Is it a problem with the images?
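One plausible cause, offered only as an assumption: a channel-order mix-up. OpenCV loads images as BGR while matplotlib displays RGB, so a picture can come out with a strong red or blue cast even though the boxes are right; py-faster-rcnn's demo reorders channels with `im[:, :, (2, 1, 0)]` before display. A tiny sketch of that reordering:

```python
import numpy as np

# A wrongly tinted result often means channel order: OpenCV stores BGR,
# matplotlib expects RGB. Reversing the last axis fixes the display.
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 2] = 255                  # a red image, in BGR storage order
rgb = bgr[:, :, (2, 1, 0)]         # reorder channels for display
print(rgb[0, 0].tolist())          # -> [255, 0, 0]
```

For grayscale inputs it is also worth confirming the loader replicates the single channel to three channels before subtracting the three-channel pixel means.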
Error configuring the interface when training faster rcnn on my own dataset
![screenshot](https://img-ask.csdn.net/upload/201704/16/1492329858_521128.png) I'm using a PASCAL-format dataset. Who can tell me! This problem is really tormenting me.
faster-RCNN classification layer
Why does the RPN classification layer cls_score output two values per anchor, the foreground probability and the background probability? Isn't the background probability just (1 - foreground probability)?
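It is indeed redundant in the probabilistic sense: with two logits per anchor, softmax([s_bg, s_fg]) gives exactly sigmoid(s_fg - s_bg) as the foreground probability. The 2-way form is used so the stock SoftmaxWithLoss layer (with its `ignore_label: -1` handling, visible in the prototxt dumps above) can be reused unchanged. A quick numerical check of the equivalence:

```python
import numpy as np

# softmax over two scores equals a sigmoid of their difference, so the
# background output carries no extra information; it exists for layer reuse.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

s_bg, s_fg = 0.3, 1.2
p_fg_softmax = softmax(np.array([s_bg, s_fg]))[1]
p_fg_sigmoid = 1.0 / (1.0 + np.exp(-(s_fg - s_bg)))
print(p_fg_softmax, p_fg_sigmoid)
```

Later single-logit RPN implementations do use a sigmoid with one output per anchor; the two formulations train to the same thing.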
How to write "Network"
Problem Description
The ALPC company is now working on its own network system, connecting all N ALPC departments. To economize on spending, the backbone network has only one router per department, and N-1 optical fibers in total to connect all routers. The usual way to measure connection speed is lag, or network latency, referring to the time taken for a sent packet of data to be received at the other end. The network is now on trial, and new photonic crystal fibers designed by ALPC42 are being tried out; the lag on the fibers can be ignored. That means lag happens when a message passes through a router. ALPC42 is trying to change routers to make the network faster. Now he wants to know, at any exact time, between any pair of nodes, which router has the K-th highest latency. He needs your help.

Input
There is only one test case in the input file. Your program receives the information of N routers and N-1 fiber connections, and Q questions of two kinds:
1. For some reason, the latency of one router changed.
2. Query the K-th longest-lag router between two routers.
The first line contains two integers N and Q (0 <= N <= 80000, 0 <= Q <= 30000). The second line contains N integers, the initial latency of each router. N-1 lines follow, each with two integers x and y, telling that a fiber connects router x and router y. Then Q lines describe the questions, three numbers k, a, b each. If k = 0, the latency of router a, Ta, changes to b; if k > 0, it asks for the latency of the k-th longest-lag router between a and b (including routers a and b). 0 <= b < 100000000. A blank line follows each case.

Output
For each question with k > 0, print a line answering the latency time. If there are fewer than k routers on the path, print "invalid request!" instead.

Sample Input
```
5 5
5 1 2 3 4
3 1
2 1
4 3
5 3
2 4 5
0 1 2
2 2 3
2 1 4
3 3 5
```

Sample Output
```
3
2
2
invalid request!
```
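Within the stated limits this needs a path data structure over the tree, but the query itself is easy to state in code. A brute-force sketch, fine for small inputs only: find the unique tree path between a and b, collect the router latencies on it, and take the k-th largest:

```python
from collections import defaultdict

def kth_on_path(n, edges, lat, a, b, k):
    # Build adjacency list of the tree.
    g = defaultdict(list)
    for x, y in edges:
        g[x].append(y)
        g[y].append(x)
    # Iterative DFS from a recording parents, then walk back from b,
    # which recovers the unique a..b path in a tree.
    parent = {a: None}
    stack = [a]
    while stack:
        u = stack.pop()
        for v in g[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    path = [b]
    while path[-1] != a:
        path.append(parent[path[-1]])
    vals = sorted((lat[v] for v in path), reverse=True)
    return vals[k - 1] if k <= len(vals) else "invalid request!"

# First query of the sample: k=2 on the path between routers 4 and 5.
lat = {1: 5, 2: 1, 3: 2, 4: 3, 5: 4}
edges = [(3, 1), (2, 1), (4, 3), (5, 3)]
print(kth_on_path(5, edges, lat, 4, 5, 2))  # -> 3
```

Point updates are trivial here (`lat[a] = b`); the real challenge at N = 80000, Q = 30000 is answering the path query without walking the whole path each time.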
Moving Points
Problem Description
Consider a number of Target points in a plane. Each Target point moves in a straight line at a constant speed and does not change direction. Now consider a Chaser point that starts at the origin and moves at a speed faster than any of the Target points. The Chaser point moves at a constant speed, but it is capable of changing direction at will. It will 'catch' a Target point, then move from there to catch another Target point, and so on. Given the parameters of the Chaser point and the Target points, what is the least amount of time it takes the Chaser point to catch all of the Target points? 'Catch' simply means that the Catcher and the Target occupy the same point in the plane at the same time. This can be instantaneous; there's no need for the Catcher to stay with the Target for any non-zero length of time.

Input
There will be several test cases in the input. Each test case begins with two integers N and C, where N (1 <= N <= 15) is the number of Target points and C (0 < C <= 1,000) is the speed of the Chaser point. Each of the next N lines has four integers describing a Target point: X Y D S, where (X,Y) is the location in the plane (-1000 <= X,Y <= 1,000) of that Target point at time 0, D (0 <= D < 360) is the direction of movement in degrees (0 degrees is the positive X axis, 90 degrees is the positive Y axis), and S (0 <= S < C) is the speed of that Target point. All Target points start moving immediately at time 0. The input ends with a line with two 0s.

Output
For each test case, output a single real number on its own line: the least amount of time needed for the Chaser point to catch all of the Target points. Print this number to exactly 2 decimal places, rounded. Output no extra spaces, and do not separate answers with blank lines.

Sample Input
```
2 25
19 19 32 10
6 45 133 19
5 10
10 20 45 3
30 10 135 4
100 100 219 5
10 100 301 4
30 30 5 3
0 0
```

Sample Output
```
12.62
12.54
```
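With N <= 15 the usual approach is a bitmask DP or memoized search over which targets are already caught; the geometric building block is the earliest time the chaser can catch one moving target. Since the chaser is strictly faster than every target, "catchable by time t" is monotone in t, so the sketch below finds that time by bracketing and bisection (the target is given by its position at time 0 and a velocity vector, not the degree/speed encoding of the input):

```python
import math

# Earliest time the chaser, free at position p at time t0, can catch a target
# that is at (x0, y0) at time 0 and moves with constant velocity (vx, vy).
# Because the chaser is strictly faster, feasibility is monotone in t.
def catch_time(p, t0, x0, y0, vx, vy, C):
    def feasible(t):
        tx, ty = x0 + vx * t, y0 + vy * t
        return C * (t - t0) >= math.hypot(tx - p[0], ty - p[1])
    lo, hi = t0, t0 + 1.0
    while not feasible(hi):          # grow the bracket until catchable
        hi = t0 + (hi - t0) * 2
    for _ in range(100):             # then bisect to high precision
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Sanity check: stationary target at (3, 4), chaser speed 1 from the origin.
print(round(catch_time((0.0, 0.0), 0.0, 3.0, 4.0, 0.0, 0.0, 1.0), 6))  # -> 5.0
```

The full solver would convert each input line's D and S to (vx, vy) = (S cos D, S sin D) and run a DP over subsets, where the state is (set of caught targets, last target caught) and the transition uses `catch_time` from the catch position and time of the previous target.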
Can a model fine-tuned with slim be used with tf-faster rcnn for fine-grained testing?
This is the error I get when using it in tf-faster rcnn:

```
Traceback (most recent call last):
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "../tools/demo.py", line 189, in <module>
    print(saver.restore(sess,tfmodel))
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1768, in restore
    six.reraise(exception_type, exception_value, exception_traceback)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
    raise value
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1752, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
  File "../tools/demo.py", line 187, in <module>
    saver = tf.train.Saver()
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1284, in __init__
    self.build()
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1296, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1333, in _build
    build_save=build_save, build_restore=build_restore)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 781, in _build_internal
    restore_sequentially, reshape)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 400, in _AddRestoreOps
    restore_sequentially)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 832, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1463, in restore_v2
    shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
    op_def=op_def)
  File "/home/lf/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

NotFoundError (see above for traceback): Key resnet_v1_101/bbox_pred/biases not found in checkpoint
	 [[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
```
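The key line is `Key resnet_v1_101/bbox_pred/biases not found in checkpoint`: `tf.train.Saver()` with no arguments tries to restore every variable in the graph, but a slim classification checkpoint only contains the `resnet_v1_101` backbone — the Faster R-CNN detection head (`bbox_pred`, `cls_score`, the RPN layers) was never saved into it. The usual workaround is to restore only the variables whose names actually appear in the checkpoint and let the rest keep their initializers. The helper below is a minimal sketch of that partitioning, using hypothetical variable names; in a real TF1 script you would list the checkpoint keys with `tf.train.list_variables(ckpt_path)` and pass the restorable subset via `tf.train.Saver(var_list=...)` before calling `restore`.

```python
def split_restorable(graph_var_names, ckpt_keys):
    """Partition graph variable names into those present in the
    checkpoint (restorable) and those that must be freshly initialized."""
    ckpt = set(ckpt_keys)
    restorable = sorted(v for v in graph_var_names if v in ckpt)
    missing = sorted(v for v in graph_var_names if v not in ckpt)
    return restorable, missing

# Hypothetical names modeled on the traceback: the graph has a backbone
# variable plus the detection head, the checkpoint has only the backbone.
graph_vars = [
    "resnet_v1_101/conv1/weights",
    "resnet_v1_101/bbox_pred/weights",
    "resnet_v1_101/bbox_pred/biases",  # the key the error complains about
]
ckpt_keys = ["resnet_v1_101/conv1/weights"]

restorable, missing = split_restorable(graph_vars, ckpt_keys)
print(restorable)  # ['resnet_v1_101/conv1/weights']
print(missing)     # ['resnet_v1_101/bbox_pred/biases', 'resnet_v1_101/bbox_pred/weights']
```

So the slim-finetuned backbone weights can be reused, but the detection head cannot come from that checkpoint — it has to be trained (or loaded from a checkpoint that was saved by tf-faster rcnn itself).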
Can faster-rcnn's bounding boxes be improved?
Conventional bounding boxes are axis-aligned rectangles parameterized as (x y w h). How can they be turned into oriented bounding boxes (x1 y1 x2 y2 x3 y3 x4 y4)? Or is there an object-detection algorithm that produces oriented bounding boxes? Something like the image below. ![图片说明](https://img-ask.csdn.net/upload/201803/10/1520641104_790494.jpg)
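There are detectors that do this: RRPN and R2CNN, for example, extend Faster R-CNN to oriented boxes (originally for scene text) by adding an angle to the anchors and regression targets, i.e. predicting (cx, cy, w, h, θ) instead of (x y w h). The four corner points in the (x1 y1 … x4 y4) form are then recovered by rotating the axis-aligned corners around the box centre. A minimal sketch of that conversion (the function name is mine, not from any particular library):

```python
import math

def obb_corners(cx, cy, w, h, theta):
    """Corners (x1 y1 ... x4 y4) of a w*h box centred at (cx, cy),
    rotated counter-clockwise by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    # Axis-aligned corner offsets from the centre, in order.
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Apply the 2D rotation matrix to each offset, then translate.
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c) for dx, dy in half]

# Sanity check: rotating a 4x2 box by 90 degrees gives the same rectangle
# with width and height swapped.
print(obb_corners(0, 0, 4, 2, math.pi / 2))
```

With θ = 0 this reduces to the ordinary (x y w h) box, so an oriented detector strictly generalizes the axis-aligned one.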