Which layer is going wrong when training faster rcnn?
  • echo Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47
    Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt.2017-04-19_01-16-47
  • ./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name ZF --weights data/imagenet_models/CaffeNet.v2.caffemodel --imdb voc_2007_trainval --cfg experiments/cfgs/faster_rcnn_alt_opt.yml
    Called with args:
    Namespace(cfg_file='experiments/cfgs/faster_rcnn_alt_opt.yml', gpu_id=0, imdb_name='voc_2007_trainval', net_name='ZF', pretrained_model='data/imagenet_models/CaffeNet.v2.caffemodel', set_cfgs=None)

    Stage 1 RPN, init from ImageNet model
    

    Init model: data/imagenet_models/CaffeNet.v2.caffemodel
    Using config:
    {'DATA_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master\data',
    'DEDUP_BOXES': 0.0625,
    'EPS': 1e-14,
    'EXP_DIR': 'default',
    'GPU_ID': 0,
    'MATLAB': 'matlab',
    'MODELS_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master\models\pascal_voc',
    'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]),
    'RNG_SEED': 3,
    'ROOT_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master',
    'TEST': {'BBOX_REG': True,
    'HAS_RPN': False,
    'MAX_SIZE': 1000,
    'NMS': 0.3,
    'PROPOSAL_METHOD': 'selective_search',
    'RPN_MIN_SIZE': 16,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POST_NMS_TOP_N': 300,
    'RPN_PRE_NMS_TOP_N': 6000,
    'SCALES': [600],
    'SVM': False},
    'TRAIN': {'ASPECT_GROUPING': True,
    'BATCH_SIZE': 128,
    'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
    'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
    'BBOX_NORMALIZE_TARGETS': True,
    'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': False,
    'BBOX_REG': False,
    'BBOX_THRESH': 0.5,
    'BG_THRESH_HI': 0.5,
    'BG_THRESH_LO': 0.1,
    'FG_FRACTION': 0.25,
    'FG_THRESH': 0.5,
    'HAS_RPN': True,
    'IMS_PER_BATCH': 1,
    'MAX_SIZE': 1000,
    'PROPOSAL_METHOD': 'gt',
    'RPN_BATCHSIZE': 256,
    'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
    'RPN_CLOBBER_POSITIVES': False,
    'RPN_FG_FRACTION': 0.5,
    'RPN_MIN_SIZE': 16,
    'RPN_NEGATIVE_OVERLAP': 0.3,
    'RPN_NMS_THRESH': 0.7,
    'RPN_POSITIVE_OVERLAP': 0.7,
    'RPN_POSITIVE_WEIGHT': -1.0,
    'RPN_POST_NMS_TOP_N': 2000,
    'RPN_PRE_NMS_TOP_N': 12000,
    'SCALES': [600],
    'SNAPSHOT_INFIX': '',
    'SNAPSHOT_ITERS': 10000,
    'USE_FLIPPED': True,
    'USE_PREFETCH': False},
    'USE_GPU_NMS': True}
    Loaded dataset voc_2007_trainval for training
    Set proposal method: gt
    Appending horizontally-flipped training examples...
    voc_2007_trainval gt roidb loaded from E:\caffe-frcnn\py-faster-rcnn-master\data\cache\voc_2007_trainval_gt_roidb.pkl
    done
    Preparing training data...
    done
    roidb len: 100
    Output will be saved to E:\caffe-frcnn\py-faster-rcnn-master\output\default\voc_2007_trainval
    Filtered 0 roidb entries: 100 -> 100
    WARNING: Logging before InitGoogleLogging() is written to STDERR
    I0419 01:16:54.964942 25240 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
    I0419 01:16:55.073168 25240 solver.cpp:44] Initializing solver from parameters:
    train_net: "models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt"
    base_lr: 0.001
    display: 20
    lr_policy: "step"
    gamma: 0.1
    momentum: 0.9
    weight_decay: 0.0005
    stepsize: 60000
    snapshot: 0
    snapshot_prefix: "zf_rpn"
    average_loss: 100
    I0419 01:16:55.073168 25240 solver.cpp:77] Creating training net from train_net file: models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt
    I0419 01:16:55.074168 25240 net.cpp:51] Initializing net from parameters:
    name: "ZF"
    state {
    phase: TRAIN
    }
    layer {
    name: "input-data"
    type: "Python"
    top: "data"
    top: "im_info"
    top: "gt_boxes"
    python_param {
    module: "roi_data_layer.layer"
    layer: "RoIDataLayer"
    param_str: "\'num_classes\': 2"
    }
    }
    layer {
    name: "conv1"
    type: "Convolution"
    bottom: "data"
    top: "conv1"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 96
    pad: 3
    kernel_size: 7
    stride: 2
    }
    }
    layer {
    name: "relu1"
    type: "ReLU"
    bottom: "conv1"
    top: "conv1"
    }
    layer {
    name: "norm1"
    type: "LRN"
    bottom: "conv1"
    top: "norm1"
    lrn_param {
    local_size: 3
    alpha: 5e-05
    beta: 0.75
    norm_region: WITHIN_CHANNEL
    engine: CAFFE
    }
    }
    layer {
    name: "pool1"
    type: "Pooling"
    bottom: "norm1"
    top: "pool1"
    pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 1
    }
    }
    layer {
    name: "conv2"
    type: "Convolution"
    bottom: "pool1"
    top: "conv2"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    stride: 2
    }
    }
    layer {
    name: "relu2"
    type: "ReLU"
    bottom: "conv2"
    top: "conv2"
    }
    layer {
    name: "norm2"
    type: "LRN"
    bottom: "conv2"
    top: "norm2"
    lrn_param {
    local_size: 3
    alpha: 5e-05
    beta: 0.75
    norm_region: WITHIN_CHANNEL
    engine: CAFFE
    }
    }
    layer {
    name: "pool2"
    type: "Pooling"
    bottom: "norm2"
    top: "pool2"
    pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 1
    }
    }
    layer {
    name: "conv3"
    type: "Convolution"
    bottom: "pool2"
    top: "conv3"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    stride: 1
    }
    }
    layer {
    name: "relu3"
    type: "ReLU"
    bottom: "conv3"
    top: "conv3"
    }
    layer {
    name: "conv4"
    type: "Convolution"
    bottom: "conv3"
    top: "conv4"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    stride: 1
    }
    }
    layer {
    name: "relu4"
    type: "ReLU"
    bottom: "conv4"
    top: "conv4"
    }
    layer {
    name: "conv5"
    type: "Convolution"
    bottom: "conv4"
    top: "conv5"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
    }
    }
    layer {
    name: "relu5"
    type: "ReLU"
    bottom: "conv5"
    top: "conv5"
    }
    layer {
    name: "rpn_conv1"
    type: "Convolution"
    bottom: "conv5"
    top: "rpn_conv1"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
    type: "gaussian"
    std: 0.01
    }
    bias_filler {
    type: "constant"
    value: 0
    }
    }
    }
    layer {
    name: "rpn_relu1"
    type: "ReLU"
    bottom: "rpn_conv1"
    top: "rpn_conv1"
    }
    layer {
    name: "rpn_cls_score"
    type: "Convolution"
    bottom: "rpn_conv1"
    top: "rpn_cls_score"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 18
    pad: 0
    kernel_size: 1
    stride: 1
    weight_filler {
    type: "gaussian"
    std: 0.01
    }
    bias_filler {
    type: "constant"
    value: 0
    }
    }
    }
    layer {
    name: "rpn_bbox_pred"
    type: "Convolution"
    bottom: "rpn_conv1"RoiDataLayer: name_to_top: {'gt_boxes': 2, 'data': 0, 'im_info': 1}

    top: "rpn_bbox_pred"
    param {
    lr_mult: 1
    }
    param {
    lr_mult: 2
    }
    convolution_param {
    num_output: 36
    pad: 0
    kernel_size: 1
    stride: 1
    weight_filler {
    type: "gaussian"
    std: 0.01
    }
    bias_filler {
    type: "constant"
    value: 0
    }
    }
    }
    layer {
    name: "rpn_cls_score_reshape"
    type: "Reshape"
    bottom: "rpn_cls_score"
    top: "rpn_cls_score_reshape"
    reshape_param {
    shape {
    dim: 0
    dim: 2
    dim: -1
    dim: 0
    }
    }
    }
    layer {
    name: "rpn-data"
    type: "Python"
    bottom: "rpn_cls_score"
    bottom: "gt_boxes"
    bottom: "im_info"
    bottom: "data"
    top: "rpn_labels"
    top: "rpn_bbox_targets"
    top: "rpn_bbox_inside_weights"
    top: "rpn_bbox_outside_weights"
    python_param {
    module: "rpn.anchor_target_layer"
    layer: "AnchorTargetLayer"
    param_str: "\'feat_stride\': 16"
    }
    }
    layer {
    name: "rpn_loss_cls"
    type: "SoftmaxWithLoss"
    bottom: "rpn_cls_score_reshape"
    bottom: "rpn_labels"
    top: "rpn_cls_loss"
    loss_weight: 1
    propagate_down: true
    propagate_down: false
    loss_param {
    ignore_label: -1
    normalize: true
    }
    }
    layer {
    name: "rpn_loss_bbox"
    type: "SmoothL1Loss"
    bottom: "rpn_bbox_pred"
    bottom: "rpn_bbox_targets"
    bottom: "rpn_bbox_inside_weights"
    bottom: "rpn_bbox_outside_weights"
    top: "rpn_loss_bbox"
    loss_weight: 1
    smooth_l1_loss_param {
    sigma: 3
    }
    }
    layer {
    name: "dummy_roi_pool_conv5"
    type: "DummyData"
    top: "dummy_roi_pool_conv5"
    dummy_data_param {
    data_filler {
    type: "gaussian"
    std: 0.01
    }
    shape {
    dim: 1
    dim: 9216
    }
    }
    }
    layer {
    name: "fc6"
    type: "InnerProduct"
    bottom: "dummy_roi_pool_conv5"
    top: "fc6"
    param {
    lr_mult: 0
    decay_mult: 0
    }
    param {
    lr_mult: 0
    decay_mult: 0
    }
    inner_product_param {
    num_output: 4096
    }
    }
    layer {
    name: "relu6"
    type: "ReLU"
    bottom: "fc6"
    top: "fc6"
    }
    layer {
    name: "fc7"
    type: "InnerProduct"
    bottom: "fc6"
    top: "fc7"
    param {
    lr_mult: 0
    decay_mult: 0
    }
    param {
    lr_mult: 0
    decay_mult: 0
    }
    inner_product_param {
    num_output: 4096
    }
    }
    layer {
    name: "silence_fc7"
    type: "Silence"
    bottom: "fc7"
    }
    I0419 01:16:55.074668 25240 layer_factory.cpp:58] Creating layer input-data
    I0419 01:16:55.109673 25240 net.cpp:84] Creating Layer input-data
    I0419 01:16:55.109673 25240 net.cpp:380] input-data -> data
    I0419 01:16:55.109673 25240 net.cpp:380] input-data -> im_info
    I0419 01:16:55.109673 25240 net.cpp:380] input-data -> gt_boxes
    I0419 01:16:55.111171 25240 net.cpp:122] Setting up input-data
    I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000)
    I0419 01:16:55.111171 25240 net.cpp:129] Top shape: 1 3 (3)
    I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 4 (4)
    I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 7200028
    I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer data_input-data_0_split
    I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer data_input-data_0_split
    I0419 01:16:55.111668 25240 net.cpp:406] data_input-data_0_split <- data
    I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_0
    I0419 01:16:55.111668 25240 net.cpp:380] data_input-data_0_split -> data_input-data_0_split_1
    I0419 01:16:55.111668 25240 net.cpp:122] Setting up data_input-data_0_split
    I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000)
    I0419 01:16:55.111668 25240 net.cpp:129] Top shape: 1 3 600 1000 (1800000)
    I0419 01:16:55.111668 25240 net.cpp:137] Memory required for data: 21600028
    I0419 01:16:55.111668 25240 layer_factory.cpp:58] Creating layer conv1
    I0419 01:16:55.111668 25240 net.cpp:84] Creating Layer conv1
    I0419 01:16:55.111668 25240 net.cpp:406] conv1 <- data_input-data_0_split_0
    I0419 01:16:55.111668 25240 net.cpp:380] conv1 -> conv1
    I0419 01:16:55.577394 25240 net.cpp:122] Setting up conv1
    I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000)
    I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 79200028
    I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer relu1
    I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer relu1
    I0419 01:16:55.577394 25240 net.cpp:406] relu1 <- conv1
    I0419 01:16:55.577394 25240 net.cpp:367] relu1 -> conv1 (in-place)
    I0419 01:16:55.577394 25240 net.cpp:122] Setting up relu1
    I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000)
    I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 136800028
    I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer norm1
    I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer norm1
    I0419 01:16:55.577394 25240 net.cpp:406] norm1 <- conv1
    I0419 01:16:55.577394 25240 net.cpp:380] norm1 -> norm1
    I0419 01:16:55.577394 25240 net.cpp:122] Setting up norm1
    I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 300 500 (14400000)
    I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 194400028
    I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer pool1
    I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer pool1
    I0419 01:16:55.577394 25240 net.cpp:406] pool1 <- norm1
    I0419 01:16:55.577394 25240 net.cpp:380] pool1 -> pool1
    I0419 01:16:55.577394 25240 net.cpp:122] Setting up pool1
    I0419 01:16:55.577394 25240 net.cpp:129] Top shape: 1 96 151 251 (3638496)
    I0419 01:16:55.577394 25240 net.cpp:137] Memory required for data: 208954012
    I0419 01:16:55.577394 25240 layer_factory.cpp:58] Creating layer conv2
    I0419 01:16:55.577394 25240 net.cpp:84] Creating Layer conv2
    I0419 01:16:55.577394 25240 net.cpp:406] conv2 <- pool1
    I0419 01:16:55.577394 25240 net.cpp:380] conv2 -> conv2
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv2
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 218759836
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu2
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu2
    I0419 01:16:55.593016 25240 net.cpp:406] relu2 <- conv2
    I0419 01:16:55.593016 25240 net.cpp:367] relu2 -> conv2 (in-place)
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu2
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 228565660
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer norm2
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer norm2
    I0419 01:16:55.593016 25240 net.cpp:406] norm2 <- conv2
    I0419 01:16:55.593016 25240 net.cpp:380] norm2 -> norm2
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up norm2
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 76 126 (2451456)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 238371484
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer pool2
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer pool2
    I0419 01:16:55.593016 25240 net.cpp:406] pool2 <- norm2
    I0419 01:16:55.593016 25240 net.cpp:380] pool2 -> pool2
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up pool2
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 240927388
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv3
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv3
    I0419 01:16:55.593016 25240 net.cpp:406] conv3 <- pool2
    I0419 01:16:55.593016 25240 net.cpp:380] conv3 -> conv3
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv3
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 244761244
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu3
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu3
    I0419 01:16:55.593016 25240 net.cpp:406] relu3 <- conv3
    I0419 01:16:55.593016 25240 net.cpp:367] relu3 -> conv3 (in-place)
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu3
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 248595100
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv4
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv4
    I0419 01:16:55.593016 25240 net.cpp:406] conv4 <- conv3
    I0419 01:16:55.593016 25240 net.cpp:380] conv4 -> conv4
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up conv4
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 252428956
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer relu4
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer relu4
    I0419 01:16:55.593016 25240 net.cpp:406] relu4 <- conv4
    I0419 01:16:55.593016 25240 net.cpp:367] relu4 -> conv4 (in-place)
    I0419 01:16:55.593016 25240 net.cpp:122] Setting up relu4
    I0419 01:16:55.593016 25240 net.cpp:129] Top shape: 1 384 39 64 (958464)
    I0419 01:16:55.593016 25240 net.cpp:137] Memory required for data: 256262812
    I0419 01:16:55.593016 25240 layer_factory.cpp:58] Creating layer conv5
    I0419 01:16:55.593016 25240 net.cpp:84] Creating Layer conv5
    I0419 01:16:55.593016 25240 net.cpp:406] conv5 <- conv4
    I0419 01:16:55.593016 25240 net.cpp:380] conv5 -> conv5
    I0419 01:16:55.608644 25240 net.cpp:122] Setting up conv5
    I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.608644 25240 net.cpp:137] Memory required for data: 258818716
    I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer relu5
    I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer relu5
    I0419 01:16:55.608644 25240 net.cpp:406] relu5 <- conv5
    I0419 01:16:55.608644 25240 net.cpp:367] relu5 -> conv5 (in-place)
    I0419 01:16:55.608644 25240 net.cpp:122] Setting up relu5
    I0419 01:16:55.608644 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.608644 25240 net.cpp:137] Memory required for data: 261374620
    I0419 01:16:55.608644 25240 layer_factory.cpp:58] Creating layer rpn_conv1
    I0419 01:16:55.608644 25240 net.cpp:84] Creating Layer rpn_conv1
    I0419 01:16:55.608644 25240 net.cpp:406] rpn_conv1 <- conv5
    I0419 01:16:55.608644 25240 net.cpp:380] rpn_conv1 -> rpn_conv1
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 263930524
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_relu1
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_relu1
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_relu1 <- rpn_conv1
    I0419 01:16:55.624267 25240 net.cpp:367] rpn_relu1 -> rpn_conv1 (in-place)
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_relu1
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 266486428
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_conv1_rpn_relu1_0_split
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_conv1_rpn_relu1_0_split
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_conv1_rpn_relu1_0_split <- rpn_conv1
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_0
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_conv1_rpn_relu1_0_split -> rpn_conv1_rpn_relu1_0_split_1
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_conv1_rpn_relu1_0_split
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 256 39 64 (638976)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271598236
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score <- rpn_conv1_rpn_relu1_0_split_0
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score -> rpn_cls_score
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 271777948
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_rpn_cls_score_0_split
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_rpn_cls_score_0_split
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_rpn_cls_score_0_split <- rpn_cls_score
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_0
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -> rpn_cls_score_rpn_cls_score_0_split_1
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_rpn_cls_score_0_split
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928)
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 18 39 64 (44928)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272137372
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_bbox_pred
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_bbox_pred
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_bbox_pred <- rpn_conv1_rpn_relu1_0_split_1
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_bbox_pred -> rpn_bbox_pred
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_bbox_pred
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 36 39 64 (89856)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272496796
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn_cls_score_reshape
    I0419 01:16:55.624267 25240 net.cpp:84] Creating Layer rpn_cls_score_reshape
    I0419 01:16:55.624267 25240 net.cpp:406] rpn_cls_score_reshape <- rpn_cls_score_rpn_cls_score_0_split_0
    I0419 01:16:55.624267 25240 net.cpp:380] rpn_cls_score_reshape -> rpn_cls_score_reshape
    I0419 01:16:55.624267 25240 net.cpp:122] Setting up rpn_cls_score_reshape
    I0419 01:16:55.624267 25240 net.cpp:129] Top shape: 1 2 351 64 (44928)
    I0419 01:16:55.624267 25240 net.cpp:137] Memory required for data: 272676508
    I0419 01:16:55.624267 25240 layer_factory.cpp:58] Creating layer rpn-data
    I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn-data
    I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- rpn_cls_score_rpn_cls_score_0_split_1
    I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- gt_boxes
    I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- im_info
    I0419 01:16:55.639891 25240 net.cpp:406] rpn-data <- data_input-data_0_split_1
    I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_labels
    I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_targets
    I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_inside_weights
    I0419 01:16:55.639891 25240 net.cpp:380] rpn-data -> rpn_bbox_outside_weights
    I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn-data
    I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 1 351 64 (22464)
    I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856)
    I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856)
    I0419 01:16:55.639891 25240 net.cpp:129] Top shape: 1 36 39 64 (89856)
    I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844636
    I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls
    I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_cls
    I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_cls_score_reshape
    I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_cls <- rpn_labels
    I0419 01:16:55.639891 25240 net.cpp:380] rpn_loss_cls -> rpn_cls_loss
    I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_cls
    I0419 01:16:55.639891 25240 net.cpp:122] Setting up rpn_loss_cls
    I0419 01:16:55.639891 25240 net.cpp:129] Top shape: (1)
    I0419 01:16:55.639891 25240 net.cpp:132] with loss weight 1
    I0419 01:16:55.639891 25240 net.cpp:137] Memory required for data: 273844640
    I0419 01:16:55.639891 25240 layer_factory.cpp:58] Creating layer rpn_loss_bbox
    I0419 01:16:55.639891 25240 net.cpp:84] Creating Layer rpn_loss_bbox
    I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_pred
    I0419 01:16:55.639891 25240 net.cpp:406] rpn_loss_bbox <- rpn_bbox_targets
    I0419 01:16:55.639891 2
    *** Check failure stack trace: ***
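Judging from where the log stops, every layer up through rpn_loss_cls sets up cleanly and the process aborts while rpn_loss_bbox (the SmoothL1Loss layer) is being wired up, so that layer and its bottoms coming out of rpn-data are the first place to look. One way to get a fuller trace is to build the training net on its own in pycaffe; a minimal repro sketch, assuming the repo root as the working directory and the faster-rcnn lib/ on PYTHONPATH:

    # Minimal repro sketch: constructing the net runs the same LayerSetUp code
    # the solver was executing when it crashed, but outside the training script,
    # so the glog failure message is easier to see in isolation.
    import caffe

    caffe.set_mode_gpu()
    caffe.set_device(0)
    net = caffe.Net('models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt',
                    caffe.TRAIN)
    for name, blob in net.blobs.items():
        print(name, blob.data.shape)  # only reached if setup succeeds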

Other related questions
faster-rcnn: a question about pre-training

Faster R-CNN training has two steps: 1. pre-training on the ImageNet dataset (1000 classes, ~10 million images); 2. fine-tuning on PASCAL VOC 2007 (20 classes, ~10,000 images) or some other dataset. Question: if I want to train on a different dataset, say cell detection with about three classes, can I use the pre-trained model from step 1 directly to initialize the parameters? If not, roughly how many cell images would I need for pre-training?

Training faster rcnn on my own dataset: the test output is a wash of red

It took several days to train this network. At test time the input is black-and-white images and the result is a solid wash of red, although the annotated locations are roughly right. What is going on? Is it a problem with the images?

Problems training Faster Rcnn on my own data! Looking for help

![screenshot](https://img-ask.csdn.net/upload/202001/20/1579495626_503757.png) As above: originally it only printed "image invalid, skipping". I then added error printing in train.py and got the error shown above. I followed https://blog.csdn.net/JJJKJJ/article/details/103141229 step by step and both training and the demo worked; once I switched to my own data it stopped working, and I cannot find the same problem anywhere online. How do I fix it? My data is 1920*1080, annotated with labelImg and converted to VOC2007 format.
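One frequent cause of "image invalid, skipping" with labelImg annotations is a box that touches the image border: pascal_voc.py in most py-faster-rcnn forks subtracts 1 from every coordinate, so an xmin or ymin of 0 underflows and the sample is rejected. A quick sanity-check sketch over the XML files, with illustrative paths:

    # Hedged check: flag boxes that would go negative or out of range after
    # py-faster-rcnn's "- 1" coordinate shift. The glob path is a placeholder.
    import glob
    import xml.etree.ElementTree as ET

    for xml_path in glob.glob('data/VOCdevkit2007/VOC2007/Annotations/*.xml'):
        root = ET.parse(xml_path).getroot()
        w = int(root.find('size/width').text)
        h = int(root.find('size/height').text)
        for obj in root.iter('object'):
            b = obj.find('bndbox')
            x1, y1 = int(float(b.find('xmin').text)), int(float(b.find('ymin').text))
            x2, y2 = int(float(b.find('xmax').text)), int(float(b.find('ymax').text))
            if x1 < 1 or y1 < 1 or x2 > w or y2 > h or x2 <= x1 or y2 <= y1:
                print(xml_path, (x1, y1, x2, y2), 'image size:', (w, h))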

Problems training faster-rcnn on my own dataset with python on a win10 system

Training faster-rcnn on my own dataset with python on a win10 system fails with: I0417 16:38:45.682274 7396 layer_factory.hpp:77] Creating layer rpn_cls_score_rpn_cls_score*** Check failure stack trace: *** What are the likely causes, and how should it be fixed?

faster rcnn demo errors out

After configuring faster rcnn, running ./tools/demo.py fails with: Check failed: registry.count(type) == 1 (0 vs. 1) Unknown layer type: Python
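"Unknown layer type: Python" generally means Caffe was compiled without Python-layer support, which py-faster-rcnn needs for RoIDataLayer and the proposal layers. For the stock Makefile build the usual fix is to uncomment this line in Makefile.config and rebuild (CMake builds expose a similar option):

    WITH_PYTHON_LAYER := 1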

faster-RCNN classification layer

Why does the RPN classification layer cls_score output two values per anchor, a foreground probability and a background probability? Isn't the background probability just (1 - foreground probability)?
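The two channels are indeed redundant: a 2-way softmax over (background, foreground) logits carries exactly the same information as a sigmoid over their difference. The published RPN uses two channels per anchor simply so the standard SoftmaxWithLoss machinery (including ignore_label: -1 for unsampled anchors, as in the prototxt above) can be reused; a one-channel head with a sigmoid cross-entropy loss also works. This is also why rpn_cls_score has num_output: 18 (2 scores × 9 anchors) and is reshaped to 2 channels before the loss. A numerical sketch of the equivalence:

    # Two-way softmax over (bg, fg) logits equals a sigmoid on their difference.
    import numpy as np

    bg, fg = 0.3, 1.2  # example logits for one anchor
    p_fg_softmax = np.exp(fg) / (np.exp(fg) + np.exp(bg))
    p_fg_sigmoid = 1.0 / (1.0 + np.exp(-(fg - bg)))
    print(p_fg_softmax, p_fg_sigmoid)  # both print ~0.7109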

faster rcnn's train.prototxt has no data layer

I am training faster rcnn on my own data and need to change the num_classes parameter of the data layer in train.prototxt, but train.prototxt has no data layer. Where do I change it?
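In these prototxts the input is not a conventional Data layer but a Python layer named input-data, and num_classes lives in its param_str; the block below is copied from the net definition in the log above. Set it to your number of object classes plus one for background (the cls_score/bbox_pred num_output values in the later-stage prototxts must be changed to match: number of classes, and 4 × number of classes, respectively):

    layer {
      name: "input-data"
      type: "Python"
      top: "data"
      top: "im_info"
      top: "gt_boxes"
      python_param {
        module: "roi_data_layer.layer"
        layer: "RoIDataLayer"
        param_str: "'num_classes': 2"
      }
    }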

What does "fix layers" mean in the faster rcnn code?

![screenshot](https://img-ask.csdn.net/upload/201810/10/1539171355_528750.png) As in the screenshot: shouldn't the restored values be the parameters obtained from pre-training? Why is there a 0 after the colon?? And what does the final Fix mean: that these layers' parameters stay unchanged during faster rcnn training?
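The ":0" is not a value: TensorFlow names tensors as "&lt;op name&gt;:&lt;output index&gt;", so "conv1/weights:0" just means the first output of that variable's op, and what is listed as restored really are the pre-trained parameters. "Fix" most likely refers to the fix_variables step visible in the training code (see the train script further down this page): conv1 weights are converted from RGB to BGR order and, for VGG16, fc6/fc7 are reshaped from convolutional to fully connected form; it does not mean those layers are frozen. A tiny sketch of the naming convention:

    # TF1-style API: tensor names are '<op name>:<output index>'; the trailing
    # ':0' is the output index, not a parameter value.
    import tensorflow as tf

    v = tf.Variable(1.0, name='conv1/weights')
    print(v.name)  # prints 'conv1/weights:0'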

faster_rcnn demo on ubuntu shows no window

Running the faster_rcnn demo on ubuntu brings up no window, but there is no error either. Why does no window appear? ubuntu 16, cuda 8.0
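demo.py draws its detections with matplotlib, so "no window, no error" usually means a non-interactive backend (e.g. Agg) is in use, which is the default on machines without a configured GUI toolkit. A hedged workaround sketch, placed before any pyplot import in tools/demo.py ('TkAgg' assumes python-tk is installed):

    import matplotlib
    matplotlib.use('TkAgg')           # must run before pyplot is imported
    import matplotlib.pyplot as plt

    # ... after the demo has drawn its figures:
    plt.savefig('demo_out.png')       # fallback that works with any backend
    plt.show()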

'NoneType' object is not subscriptable when running faster rcnn on my own dataset on windows (tensorflow)

Following an online tutorial to run faster rcnn on my own dataset, I get 'NoneType' object is not subscriptable. The explanations I found online did not make the cause clear... How do I fix this? Thanks! ![screenshot](https://img-ask.csdn.net/upload/201910/20/1571582480_395221.png)

How do I get AP and mAP results after training with Faster-RCNN-TensorFlow-Python3-master?

From what I have read, tf-faster-rcnn and caffe-faster-rcnn both use test_net.py to evaluate training results. But I am using Faster-RCNN-TensorFlow-Python3-master, which has no test_net.py. How do I obtain AP and mAP?
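Most forks of tf-faster-rcnn, including the Python3 one, still carry the standard VOC evaluator inherited from py-faster-rcnn (lib/datasets/voc_eval.py); if it is present, AP can be computed directly from detection files written in VOC format. A sketch, with the class list and paths as placeholders:

    # Hedged sketch: per-class AP with the stock VOC evaluator, assuming the repo
    # carries lib/datasets/voc_eval.py and detections were written one file per
    # class in "image_id score x1 y1 x2 y2" lines. All paths are placeholders.
    from lib.datasets.voc_eval import voc_eval

    classes = ['cat', 'dog']  # your classes, excluding background
    aps = []
    for cls in classes:
        rec, prec, ap = voc_eval(
            'results/det_test_{:s}.txt',                           # detections (template)
            'data/VOCdevkit2007/VOC2007/Annotations/{:s}.xml',     # annotation template
            'data/VOCdevkit2007/VOC2007/ImageSets/Main/test.txt',  # test image list
            cls, 'cache', ovthresh=0.5, use_07_metric=True)
        aps.append(ap)
        print('{:s}: AP = {:.4f}'.format(cls, ap))
    print('mAP = {:.4f}'.format(sum(aps) / len(aps)))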

faster-rcnn stalls after a certain number of iterations (own dataset)

![screenshot](https://img-ask.csdn.net/upload/201803/11/1520732333_841301.png) How do I fix this?

Can faster-rcnn's bounding boxes be improved?

Conventional bounding boxes are axis-aligned rectangles, i.e. (x y w h). How can they be turned into oriented bounding boxes (x1 y1 x2 y2 x3 y3 x4 y4)? Or is there an existing detector with oriented bounding boxes? Something like the image below. ![screenshot](https://img-ask.csdn.net/upload/201803/10/1520641104_790494.jpg)

How do I visualize the training process of the TensorFlow version of faster rcnn?

Total beginner here. I am running the TensorFlow version of faster rcnn from GitHub on a win10 system. The code has no statements that write a training log, so there is no log file; what do I add so that training can be displayed in tensorboard? The train code is below:

    import time
    import tensorflow as tf
    import numpy as np
    from tensorflow.python import pywrap_tensorflow

    import lib.config.config as cfg
    from lib.datasets import roidb as rdl_roidb
    from lib.datasets.factory import get_imdb
    from lib.datasets.imdb import imdb as imdb2
    from lib.layer_utils.roi_data_layer import RoIDataLayer
    from lib.nets.vgg16 import vgg16
    from lib.utils.timer import Timer

    try:
        import cPickle as pickle
    except ImportError:
        import pickle
    import os


    def get_training_roidb(imdb):
        """Returns a roidb (Region of Interest database) for use in training."""
        if True:
            print('Appending horizontally-flipped training examples...')
            imdb.append_flipped_images()
            print('done')
        print('Preparing training data...')
        rdl_roidb.prepare_roidb(imdb)
        print('done')
        return imdb.roidb


    def combined_roidb(imdb_names):
        """Combine multiple roidbs"""

        def get_roidb(imdb_name):
            imdb = get_imdb(imdb_name)
            print('Loaded dataset `{:s}` for training'.format(imdb.name))
            imdb.set_proposal_method("gt")
            print('Set proposal method: {:s}'.format("gt"))
            roidb = get_training_roidb(imdb)
            return roidb

        roidbs = [get_roidb(s) for s in imdb_names.split('+')]
        roidb = roidbs[0]
        if len(roidbs) > 1:
            for r in roidbs[1:]:
                roidb.extend(r)
            tmp = get_imdb(imdb_names.split('+')[1])
            imdb = imdb2(imdb_names, tmp.classes)
        else:
            imdb = get_imdb(imdb_names)
        return imdb, roidb


    class Train:
        def __init__(self):
            # Create network
            if cfg.FLAGS.net == 'vgg16':
                self.net = vgg16(batch_size=cfg.FLAGS.ims_per_batch)
            else:
                raise NotImplementedError

            self.imdb, self.roidb = combined_roidb("voc_2007_trainval")
            self.data_layer = RoIDataLayer(self.roidb, self.imdb.num_classes)
            self.output_dir = cfg.get_output_dir(self.imdb, 'default')

        def train(self):
            # Create session
            tfconfig = tf.ConfigProto(allow_soft_placement=True)
            tfconfig.gpu_options.allow_growth = True
            sess = tf.Session(config=tfconfig)

            with sess.graph.as_default():
                tf.set_random_seed(cfg.FLAGS.rng_seed)
                layers = self.net.create_architecture(sess, "TRAIN", self.imdb.num_classes, tag='default')
                loss = layers['total_loss']
                lr = tf.Variable(cfg.FLAGS.learning_rate, trainable=False)
                momentum = cfg.FLAGS.momentum
                optimizer = tf.train.MomentumOptimizer(lr, momentum)
                gvs = optimizer.compute_gradients(loss)

                # Double bias: double the gradient of the bias if set
                if cfg.FLAGS.double_bias:
                    final_gvs = []
                    with tf.variable_scope('Gradient_Mult'):
                        for grad, var in gvs:
                            scale = 1.
                            if cfg.FLAGS.double_bias and '/biases:' in var.name:
                                scale *= 2.
                            if not np.allclose(scale, 1.0):
                                grad = tf.multiply(grad, scale)
                            final_gvs.append((grad, var))
                    train_op = optimizer.apply_gradients(final_gvs)
                else:
                    train_op = optimizer.apply_gradients(gvs)

                # We will handle the snapshots ourselves
                self.saver = tf.train.Saver(max_to_keep=100000)
                # Write the train and validation information to tensorboard
                # writer = tf.summary.FileWriter(self.tbdir, sess.graph)
                # valwriter = tf.summary.FileWriter(self.tbvaldir)

            # Load weights: fresh train directly from ImageNet weights
            print('Loading initial model weights from {:s}'.format(cfg.FLAGS.pretrained_model))
            variables = tf.global_variables()
            # Initialize all variables first
            sess.run(tf.variables_initializer(variables, name='init'))
            var_keep_dic = self.get_variables_in_checkpoint_file(cfg.FLAGS.pretrained_model)
            # Get the variables to restore, ignoring the variables to fix
            variables_to_restore = self.net.get_variables_to_restore(variables, var_keep_dic)
            restorer = tf.train.Saver(variables_to_restore)
            restorer.restore(sess, cfg.FLAGS.pretrained_model)
            print('Loaded.')
            # Need to fix the variables before loading, so that the RGB weights are changed to BGR
            # For VGG16 it also changes the convolutional weights fc6 and fc7 to
            # fully connected weights
            self.net.fix_variables(sess, cfg.FLAGS.pretrained_model)
            print('Fixed.')
            sess.run(tf.assign(lr, cfg.FLAGS.learning_rate))
            last_snapshot_iter = 0

            timer = Timer()
            iter = last_snapshot_iter + 1
            last_summary_time = time.time()
            while iter < cfg.FLAGS.max_iters + 1:
                # Learning rate
                if iter == cfg.FLAGS.step_size + 1:
                    # Add snapshot here before reducing the learning rate
                    # self.snapshot(sess, iter)
                    sess.run(tf.assign(lr, cfg.FLAGS.learning_rate * cfg.FLAGS.gamma))

                timer.tic()
                # Get training data, one batch at a time
                blobs = self.data_layer.forward()
                # Compute the graph without summary
                rpn_loss_cls, rpn_loss_box, loss_cls, loss_box, total_loss = self.net.train_step(sess, blobs, train_op)
                timer.toc()
                iter += 1

                # Display training information
                if iter % (cfg.FLAGS.display) == 0:
                    print('iter: %d / %d, total loss: %.6f\n >>> rpn_loss_cls: %.6f\n '
                          '>>> rpn_loss_box: %.6f\n >>> loss_cls: %.6f\n >>> loss_box: %.6f\n ' %
                          (iter, cfg.FLAGS.max_iters, total_loss, rpn_loss_cls, rpn_loss_box, loss_cls, loss_box))
                    print('speed: {:.3f}s / iter'.format(timer.average_time))

                if iter % cfg.FLAGS.snapshot_iterations == 0:
                    self.snapshot(sess, iter)

        def get_variables_in_checkpoint_file(self, file_name):
            try:
                reader = pywrap_tensorflow.NewCheckpointReader(file_name)
                var_to_shape_map = reader.get_variable_to_shape_map()
                return var_to_shape_map
            except Exception as e:  # pylint: disable=broad-except
                print(str(e))
                if "corrupted compressed block contents" in str(e):
                    print("It's likely that your checkpoint file has been compressed "
                          "with SNAPPY.")

        def snapshot(self, sess, iter):
            net = self.net
            if not os.path.exists(self.output_dir):
                os.makedirs(self.output_dir)

            # Store the model snapshot
            filename = 'vgg16_faster_rcnn_iter_{:d}'.format(iter) + '.ckpt'
            filename = os.path.join(self.output_dir, filename)
            self.saver.save(sess, filename)
            print('Wrote snapshot to: {:s}'.format(filename))

            # Also store some meta information, random state, etc.
            nfilename = 'vgg16_faster_rcnn_iter_{:d}'.format(iter) + '.pkl'
            nfilename = os.path.join(self.output_dir, nfilename)
            # current state of numpy random
            st0 = np.random.get_state()
            # current position in the database
            cur = self.data_layer._cur
            # current shuffled indexes of the database
            perm = self.data_layer._perm
            # Dump the meta info
            with open(nfilename, 'wb') as fid:
                pickle.dump(st0, fid, pickle.HIGHEST_PROTOCOL)
                pickle.dump(cur, fid, pickle.HIGHEST_PROTOCOL)
                pickle.dump(perm, fid, pickle.HIGHEST_PROTOCOL)
                pickle.dump(iter, fid, pickle.HIGHEST_PROTOCOL)

            return filename, nfilename


    if __name__ == '__main__':
        train = Train()
        train.train()
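The hooks are already half-present: the script has the tf.summary.FileWriter lines commented out. Since net.train_step already returns the loss values as Python floats, the least invasive addition is to write tf.Summary protos directly, which avoids touching the graph or the feed dict. A sketch (the directory name and tags are illustrative):

    # 1) once, right after the session and graph are built:
    writer = tf.summary.FileWriter('./tb_logs', sess.graph)

    # 2) inside the training loop, after train_step returns the losses:
    summary = tf.Summary(value=[
        tf.Summary.Value(tag='loss/total',   simple_value=float(total_loss)),
        tf.Summary.Value(tag='loss/rpn_cls', simple_value=float(rpn_loss_cls)),
        tf.Summary.Value(tag='loss/rpn_box', simple_value=float(rpn_loss_box)),
        tf.Summary.Value(tag='loss/cls',     simple_value=float(loss_cls)),
        tf.Summary.Value(tag='loss/box',     simple_value=float(loss_box)),
    ])
    writer.add_summary(summary, iter)

Then run tensorboard --logdir ./tb_logs and open the Scalars tab.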

Loss becomes nan during Mask RCNN training (data annotated with labelme)

1. It is not a batch-size or learning-rate problem: I even set the learning rate to 0 and the same thing happens, i.e. after a few iterations (not right away) the loss becomes nan, while the other five losses converge normally. 2. The number of training classes matches the number of classes in the dataset. Does anyone here know the cause? Many thanks!! ![screenshot](https://img-ask.csdn.net/upload/201904/01/1554111703_91454.png)

faster-rcnn reports "error using fix"

Could someone take a look at this error:
Error using fix
Too many input arguments
Error in proposal_train (line 86)
fix validation data
Error in Faster_RCNN_Train.do_proposal_train (line 7)
model_stage.output_model_file = proposal_train(conf, dataset.imdb_train, dataset.roidb_train, …
Error in script_faster_rcnn_VOC2007_ZF (line 45)
model.stage1_rpn = Faster_RCNN_Train.do_proposal_train(conf_proposal, dataset, model.stage1_rpn, opts.do_val);
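The trace suggests a mangled comment rather than a real call to MATLAB's fix(): line 86 of proposal_train.m is presumably meant to read

    % fix validation data

and with the leading % lost, MATLAB's command syntax parses the line as fix('validation', 'data'), which fails with "Too many input arguments". Restoring the comment marker (or re-downloading an unmodified proposal_train.m) should clear it.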

Errors building the Windows10+Tensorflow+faster-rcnn environment

![screenshot](https://img-ask.csdn.net/upload/201812/09/1544365223_74260.png) Setting up Fast R-cnn on win10, Python 3.5.1. Running python setup.py build_ext --inplace gives: LINK : warning LNK4001: no object files specified; libraries used / LINK : warning LNK4068: /MACHINE not specified; defaulting to X64 / LINK : fatal error LNK1159: no output file specified / error: command 'E:\\Program Files\\VS14\\VC\\BIN\\amd64\\link.exe' failed with exit status 1159. How do I get past this error?

Errors when configuring the interfaces to train faster rcnn on my own dataset

![screenshot](https://img-ask.csdn.net/upload/201704/16/1492329858_521128.png) I am using a PASCAL-format dataset. who can tell me! This problem is really torture.

win10 system: "check failure stack trace" when training py_faster_rcnn with caffe. What is the problem?

![screenshot](https://img-ask.csdn.net/upload/201904/19/1555649815_813823.jpg)![screenshot](https://img-ask.csdn.net/upload/201904/19/1555649896_433202.jpg)![screenshot](https://img-ask.csdn.net/upload/201904/19/1555649947_150815.jpg)![screenshot](https://img-ask.csdn.net/upload/201904/19/1555650374_139122.jpg)
