weixin_39636226
2020-11-26 18:21

Hello, how can I use a custom YOLOv3 network structure for TensorRT conversion?

First of all, thank you for your project and for your help.

I want to accelerate a layer-pruned YOLOv3 (based on the ultralytics repo) with TensorRT.

(https://github.com/wang-xinyu/tensorrtx/blob/5d879b5886895bdf7f052e708794261679cd10db/yolov3/yolov3.cpp#L235) Is it enough to start from here and modify the code according to my pruned network structure?

My pruned YOLOv3 has 70 layers in total: 12 shortcut layers were removed, and the YOLO layers are layers 46, 58, and 70. Until now I have always converted the darknet model to Caffe format first and then generated the TensorRT engine from that, but the resulting engine produced nothing but wrong detections, so I would like to use your project to convert to TensorRT directly.

The network structure from the Caffe conversion looked like this:

`name: "Darkent2Caffe" input: "data" input_dim: 1 input_dim: 3 input_dim: 640 input_dim: 640

layer { bottom: "data" top: "layer1-conv" name: "layer1-conv" type: "Convolution" convolution_param { num_output: 12 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer1-conv" top: "layer1-conv" name: "layer1-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer1-conv" top: "layer1-conv" name: "layer1-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer1-conv" top: "layer1-conv" name: "layer1-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer1-conv" top: "layer2-conv" name: "layer2-conv" type: "Convolution" convolution_param { num_output: 47 kernel_size: 3 pad: 1 stride: 2 bias_term: false } } layer { bottom: "layer2-conv" top: "layer2-conv" name: "layer2-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer2-conv" top: "layer2-conv" name: "layer2-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer2-conv" top: "layer2-conv" name: "layer2-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer2-conv" top: "layer3-conv" name: "layer3-conv" type: "Convolution" convolution_param { num_output: 13 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer3-conv" top: "layer3-conv" name: "layer3-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer3-conv" top: "layer3-conv" name: "layer3-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer3-conv" top: "layer3-conv" name: "layer3-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer3-conv" top: "layer4-conv" name: "layer4-conv" type: "Convolution" convolution_param { num_output: 47 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer4-conv" top: "layer4-conv" name: "layer4-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer4-conv" top: "layer4-conv" name: 
"layer4-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer4-conv" top: "layer4-conv" name: "layer4-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer2-conv" bottom: "layer4-conv" top: "layer5-shortcut" name: "layer5-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer5-shortcut" top: "layer6-conv" name: "layer6-conv" type: "Convolution" convolution_param { num_output: 115 kernel_size: 3 pad: 1 stride: 2 bias_term: false } } layer { bottom: "layer6-conv" top: "layer6-conv" name: "layer6-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer6-conv" top: "layer6-conv" name: "layer6-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer6-conv" top: "layer6-conv" name: "layer6-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer6-conv" top: "layer7-conv" name: "layer7-conv" type: "Convolution" convolution_param { num_output: 27 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer7-conv" top: "layer7-conv" name: "layer7-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer7-conv" top: "layer7-conv" name: "layer7-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer7-conv" top: "layer7-conv" name: "layer7-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer7-conv" top: "layer8-conv" name: "layer8-conv" type: "Convolution" convolution_param { num_output: 115 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer8-conv" top: "layer8-conv" name: "layer8-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer8-conv" top: "layer8-conv" name: "layer8-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer8-conv" top: "layer8-conv" name: "layer8-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer6-conv" 
bottom: "layer8-conv" top: "layer9-shortcut" name: "layer9-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer9-shortcut" top: "layer10-conv" name: "layer10-conv" type: "Convolution" convolution_param { num_output: 21 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer10-conv" top: "layer10-conv" name: "layer10-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer10-conv" top: "layer10-conv" name: "layer10-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer10-conv" top: "layer10-conv" name: "layer10-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer10-conv" top: "layer11-conv" name: "layer11-conv" type: "Convolution" convolution_param { num_output: 115 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer11-conv" top: "layer11-conv" name: "layer11-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer11-conv" top: "layer11-conv" name: "layer11-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer11-conv" top: "layer11-conv" name: "layer11-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer9-shortcut" bottom: "layer11-conv" top: "layer12-shortcut" name: "layer12-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer12-shortcut" top: "layer13-conv" name: "layer13-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 2 bias_term: false } } layer { bottom: "layer13-conv" top: "layer13-conv" name: "layer13-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer13-conv" top: "layer13-conv" name: "layer13-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer13-conv" top: "layer13-conv" name: "layer13-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer13-conv" top: 
"layer14-conv" name: "layer14-conv" type: "Convolution" convolution_param { num_output: 49 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer14-conv" top: "layer14-conv" name: "layer14-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer14-conv" top: "layer14-conv" name: "layer14-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer14-conv" top: "layer14-conv" name: "layer14-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer14-conv" top: "layer15-conv" name: "layer15-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer15-conv" top: "layer15-conv" name: "layer15-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer15-conv" top: "layer15-conv" name: "layer15-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer15-conv" top: "layer15-conv" name: "layer15-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer13-conv" bottom: "layer15-conv" top: "layer16-shortcut" name: "layer16-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer16-shortcut" top: "layer17-conv" name: "layer17-conv" type: "Convolution" convolution_param { num_output: 40 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer17-conv" top: "layer17-conv" name: "layer17-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer17-conv" top: "layer17-conv" name: "layer17-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer17-conv" top: "layer17-conv" name: "layer17-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer17-conv" top: "layer18-conv" name: "layer18-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: 
"layer18-conv" top: "layer18-conv" name: "layer18-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer18-conv" top: "layer18-conv" name: "layer18-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer18-conv" top: "layer18-conv" name: "layer18-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer16-shortcut" bottom: "layer18-conv" top: "layer19-shortcut" name: "layer19-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer19-shortcut" top: "layer20-conv" name: "layer20-conv" type: "Convolution" convolution_param { num_output: 45 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer20-conv" top: "layer20-conv" name: "layer20-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer20-conv" top: "layer20-conv" name: "layer20-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer20-conv" top: "layer20-conv" name: "layer20-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer20-conv" top: "layer21-conv" name: "layer21-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer21-conv" top: "layer21-conv" name: "layer21-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer21-conv" top: "layer21-conv" name: "layer21-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer21-conv" top: "layer21-conv" name: "layer21-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer19-shortcut" bottom: "layer21-conv" top: "layer22-shortcut" name: "layer22-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer22-shortcut" top: "layer23-conv" name: "layer23-conv" type: "Convolution" convolution_param { num_output: 35 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: 
"layer23-conv" top: "layer23-conv" name: "layer23-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer23-conv" top: "layer23-conv" name: "layer23-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer23-conv" top: "layer23-conv" name: "layer23-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer23-conv" top: "layer24-conv" name: "layer24-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer24-conv" top: "layer24-conv" name: "layer24-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer24-conv" top: "layer24-conv" name: "layer24-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer24-conv" top: "layer24-conv" name: "layer24-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer22-shortcut" bottom: "layer24-conv" top: "layer25-shortcut" name: "layer25-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer25-shortcut" top: "layer26-conv" name: "layer26-conv" type: "Convolution" convolution_param { num_output: 36 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer26-conv" top: "layer26-conv" name: "layer26-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer26-conv" top: "layer26-conv" name: "layer26-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer26-conv" top: "layer26-conv" name: "layer26-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer26-conv" top: "layer27-conv" name: "layer27-conv" type: "Convolution" convolution_param { num_output: 242 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer27-conv" top: "layer27-conv" name: "layer27-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer27-conv" top: 
"layer27-conv" name: "layer27-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer27-conv" top: "layer27-conv" name: "layer27-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer25-shortcut" bottom: "layer27-conv" top: "layer28-shortcut" name: "layer28-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer28-shortcut" top: "layer29-conv" name: "layer29-conv" type: "Convolution" convolution_param { num_output: 276 kernel_size: 3 pad: 1 stride: 2 bias_term: false } } layer { bottom: "layer29-conv" top: "layer29-conv" name: "layer29-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer29-conv" top: "layer29-conv" name: "layer29-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer29-conv" top: "layer29-conv" name: "layer29-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer29-conv" top: "layer30-conv" name: "layer30-conv" type: "Convolution" convolution_param { num_output: 15 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer30-conv" top: "layer30-conv" name: "layer30-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer30-conv" top: "layer30-conv" name: "layer30-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer30-conv" top: "layer30-conv" name: "layer30-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer30-conv" top: "layer31-conv" name: "layer31-conv" type: "Convolution" convolution_param { num_output: 276 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer31-conv" top: "layer31-conv" name: "layer31-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer31-conv" top: "layer31-conv" name: "layer31-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer31-conv" top: "layer31-conv" name: "layer31-act" type: 
"ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer29-conv" bottom: "layer31-conv" top: "layer32-shortcut" name: "layer32-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer32-shortcut" top: "layer33-conv" name: "layer33-conv" type: "Convolution" convolution_param { num_output: 219 kernel_size: 3 pad: 1 stride: 2 bias_term: false } } layer { bottom: "layer33-conv" top: "layer33-conv" name: "layer33-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer33-conv" top: "layer33-conv" name: "layer33-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer33-conv" top: "layer33-conv" name: "layer33-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer33-conv" top: "layer34-conv" name: "layer34-conv" type: "Convolution" convolution_param { num_output: 37 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer34-conv" top: "layer34-conv" name: "layer34-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer34-conv" top: "layer34-conv" name: "layer34-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer34-conv" top: "layer34-conv" name: "layer34-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer34-conv" top: "layer35-conv" name: "layer35-conv" type: "Convolution" convolution_param { num_output: 219 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer35-conv" top: "layer35-conv" name: "layer35-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer35-conv" top: "layer35-conv" name: "layer35-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer35-conv" top: "layer35-conv" name: "layer35-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer33-conv" bottom: "layer35-conv" top: "layer36-shortcut" name: "layer36-shortcut" type: "Eltwise" 
eltwise_param { operation: SUM } } layer { bottom: "layer36-shortcut" top: "layer37-conv" name: "layer37-conv" type: "Convolution" convolution_param { num_output: 22 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer37-conv" top: "layer37-conv" name: "layer37-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer37-conv" top: "layer37-conv" name: "layer37-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer37-conv" top: "layer37-conv" name: "layer37-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer37-conv" top: "layer38-conv" name: "layer38-conv" type: "Convolution" convolution_param { num_output: 219 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer38-conv" top: "layer38-conv" name: "layer38-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer38-conv" top: "layer38-conv" name: "layer38-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer38-conv" top: "layer38-conv" name: "layer38-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer36-shortcut" bottom: "layer38-conv" top: "layer39-shortcut" name: "layer39-shortcut" type: "Eltwise" eltwise_param { operation: SUM } } layer { bottom: "layer39-shortcut" top: "layer40-conv" name: "layer40-conv" type: "Convolution" convolution_param { num_output: 111 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer40-conv" top: "layer40-conv" name: "layer40-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer40-conv" top: "layer40-conv" name: "layer40-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer40-conv" top: "layer40-conv" name: "layer40-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer40-conv" top: "layer41-conv" name: "layer41-conv" type: "Convolution" convolution_param { num_output: 79 
kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer41-conv" top: "layer41-conv" name: "layer41-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer41-conv" top: "layer41-conv" name: "layer41-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer41-conv" top: "layer41-conv" name: "layer41-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer41-conv" top: "layer42-conv" name: "layer42-conv" type: "Convolution" convolution_param { num_output: 103 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer42-conv" top: "layer42-conv" name: "layer42-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer42-conv" top: "layer42-conv" name: "layer42-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer42-conv" top: "layer42-conv" name: "layer42-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer42-conv" top: "layer43-conv" name: "layer43-conv" type: "Convolution" convolution_param { num_output: 130 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer43-conv" top: "layer43-conv" name: "layer43-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer43-conv" top: "layer43-conv" name: "layer43-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer43-conv" top: "layer43-conv" name: "layer43-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer43-conv" top: "layer44-conv" name: "layer44-conv" type: "Convolution" convolution_param { num_output: 163 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer44-conv" top: "layer44-conv" name: "layer44-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer44-conv" top: "layer44-conv" name: "layer44-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: 
"layer44-conv" top: "layer44-conv" name: "layer44-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer44-conv" top: "layer45-conv" name: "layer45-conv" type: "Convolution" convolution_param { num_output: 15 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer45-conv" top: "layer45-conv" name: "layer45-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer45-conv" top: "layer45-conv" name: "layer45-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer45-conv" top: "layer45-conv" name: "layer45-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer45-conv" top: "layer46-conv" name: "layer46-conv" type: "Convolution" convolution_param { num_output: 39 kernel_size: 1 pad: 0 stride: 1 bias_term: true } } layer { bottom: "layer44-conv" top: "layer48-route" name: "layer48-route" type: "Concat" } layer { bottom: "layer48-route" top: "layer49-conv" name: "layer49-conv" type: "Convolution" convolution_param { num_output: 256 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer49-conv" top: "layer49-conv" name: "layer49-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer49-conv" top: "layer49-conv" name: "layer49-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer49-conv" top: "layer49-conv" name: "layer49-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer49-conv" top: "layer50-upsample" name: "layer50-upsample" type: "Upsample" # upsample_param { # scale: 2 # } } layer { bottom: "layer50-upsample" bottom: "layer32-shortcut" top: "layer51-route" name: "layer51-route" type: "Concat" } layer { bottom: "layer51-route" top: "layer52-conv" name: "layer52-conv" type: "Convolution" convolution_param { num_output: 86 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer52-conv" top: "layer52-conv" name: "layer52-bn" 
type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer52-conv" top: "layer52-conv" name: "layer52-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer52-conv" top: "layer52-conv" name: "layer52-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer52-conv" top: "layer53-conv" name: "layer53-conv" type: "Convolution" convolution_param { num_output: 84 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer53-conv" top: "layer53-conv" name: "layer53-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer53-conv" top: "layer53-conv" name: "layer53-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer53-conv" top: "layer53-conv" name: "layer53-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer53-conv" top: "layer54-conv" name: "layer54-conv" type: "Convolution" convolution_param { num_output: 68 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer54-conv" top: "layer54-conv" name: "layer54-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer54-conv" top: "layer54-conv" name: "layer54-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer54-conv" top: "layer54-conv" name: "layer54-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer54-conv" top: "layer55-conv" name: "layer55-conv" type: "Convolution" convolution_param { num_output: 110 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer55-conv" top: "layer55-conv" name: "layer55-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer55-conv" top: "layer55-conv" name: "layer55-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer55-conv" top: "layer55-conv" name: "layer55-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: 
"layer55-conv" top: "layer56-conv" name: "layer56-conv" type: "Convolution" convolution_param { num_output: 80 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer56-conv" top: "layer56-conv" name: "layer56-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer56-conv" top: "layer56-conv" name: "layer56-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer56-conv" top: "layer56-conv" name: "layer56-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer56-conv" top: "layer57-conv" name: "layer57-conv" type: "Convolution" convolution_param { num_output: 64 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer57-conv" top: "layer57-conv" name: "layer57-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer57-conv" top: "layer57-conv" name: "layer57-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer57-conv" top: "layer57-conv" name: "layer57-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer57-conv" top: "layer58-conv" name: "layer58-conv" type: "Convolution" convolution_param { num_output: 39 kernel_size: 1 pad: 0 stride: 1 bias_term: true } } layer { bottom: "layer56-conv" top: "layer60-route" name: "layer60-route" type: "Concat" } layer { bottom: "layer60-route" top: "layer61-conv" name: "layer61-conv" type: "Convolution" convolution_param { num_output: 128 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer61-conv" top: "layer61-conv" name: "layer61-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer61-conv" top: "layer61-conv" name: "layer61-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer61-conv" top: "layer61-conv" name: "layer61-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer61-conv" top: "layer62-upsample" name: 
"layer62-upsample" type: "Upsample" # upsample_param { # scale: 2 # } } layer { bottom: "layer62-upsample" bottom: "layer28-shortcut" top: "layer63-route" name: "layer63-route" type: "Concat" } layer { bottom: "layer63-route" top: "layer64-conv" name: "layer64-conv" type: "Convolution" convolution_param { num_output: 45 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer64-conv" top: "layer64-conv" name: "layer64-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer64-conv" top: "layer64-conv" name: "layer64-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer64-conv" top: "layer64-conv" name: "layer64-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer64-conv" top: "layer65-conv" name: "layer65-conv" type: "Convolution" convolution_param { num_output: 70 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer65-conv" top: "layer65-conv" name: "layer65-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer65-conv" top: "layer65-conv" name: "layer65-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer65-conv" top: "layer65-conv" name: "layer65-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer65-conv" top: "layer66-conv" name: "layer66-conv" type: "Convolution" convolution_param { num_output: 42 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer66-conv" top: "layer66-conv" name: "layer66-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer66-conv" top: "layer66-conv" name: "layer66-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer66-conv" top: "layer66-conv" name: "layer66-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer66-conv" top: "layer67-conv" name: "layer67-conv" type: "Convolution" convolution_param { num_output: 75 
kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer67-conv" top: "layer67-conv" name: "layer67-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer67-conv" top: "layer67-conv" name: "layer67-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer67-conv" top: "layer67-conv" name: "layer67-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer67-conv" top: "layer68-conv" name: "layer68-conv" type: "Convolution" convolution_param { num_output: 49 kernel_size: 1 pad: 0 stride: 1 bias_term: false } } layer { bottom: "layer68-conv" top: "layer68-conv" name: "layer68-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer68-conv" top: "layer68-conv" name: "layer68-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer68-conv" top: "layer68-conv" name: "layer68-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer68-conv" top: "layer69-conv" name: "layer69-conv" type: "Convolution" convolution_param { num_output: 72 kernel_size: 3 pad: 1 stride: 1 bias_term: false } } layer { bottom: "layer69-conv" top: "layer69-conv" name: "layer69-bn" type: "BatchNorm" batch_norm_param { use_global_stats: true } } layer { bottom: "layer69-conv" top: "layer69-conv" name: "layer69-scale" type: "Scale" scale_param { bias_term: true } } layer { bottom: "layer69-conv" top: "layer69-conv" name: "layer69-act" type: "ReLU" relu_param { negative_slope: 0.1 } } layer { bottom: "layer69-conv" top: "layer70-conv" name: "layer70-conv" type: "Convolution" convolution_param { num_output: 39 kernel_size: 1 pad: 0 stride: 1 bias_term: true } } layer { #the bottoms are the yolo input layers bottom: "layer46-conv" bottom: "layer58-conv" bottom: "layer70-conv" top: "yolo-det" name: "yolo-det" type: "Yolo" }`

This question comes from the open-source project: wang-xinyu/tensorrtx


6 replies

  • weixin_39876877 5 months ago

    You need to modify createEngine() in yolov3.cpp and redefine the network structure there.
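As a rough illustration of what "redefine the network structure" involves, here is a hypothetical helper (my own sketch, not code from the repo, assuming a standard darknet-style cfg file): it collects each `[convolutional]` block's (filters, size, stride, pad) in network order, so the values can be transcribed one by one into the `convBnLeaky(...)` calls in createEngine().

```python
# Hypothetical helper: collect (filters, size, stride, pad) for every
# [convolutional] block of a darknet-style cfg, in network order, so the
# values can be transcribed into convBnLeaky(...) calls one by one.
def parse_conv_params(cfg_text):
    params, current = [], None
    for raw in cfg_text.splitlines():
        line = raw.split('#')[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if line.startswith('['):
            if current is not None:
                params.append(current)
            # track only convolutional sections; darknet defaults shown
            current = ({'filters': 0, 'size': 1, 'stride': 1, 'pad': 0}
                       if line == '[convolutional]' else None)
        elif current is not None and '=' in line:
            key, value = (s.strip() for s in line.split('=', 1))
            if key in current:
                current[key] = int(value)
    if current is not None:
        params.append(current)
    return params

# Tiny cfg fragment in the same shape as a pruned yolov3 cfg (made up here):
cfg = """
[convolutional]
batch_normalize=1
filters=12
size=3
stride=1
pad=1

[shortcut]
from=-3

[convolutional]
filters=39
size=1
stride=1
pad=1
"""
print(parse_conv_params(cfg))
```

Running it over the real pruned cfg would give the per-layer parameter list in the same order the layers must be added in createEngine().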

  • weixin_39636226 5 months ago

    > You need to modify createEngine() in yolov3.cpp and redefine the network structure there.

    Thank you. Then could you tell me what the parameters in createEngine()'s network definition mean? For example, for the first layer, auto lr0 = convBnLeaky(network, weightMap, *data, 32, 3, 1, 1, 0):

    32 = number of kernels

    3 = kernel size

    1 = stride

    1 = what is this one? I assumed it was pad, but pad is 1 everywhere in both the original cfg and my pruned cfg, while your network definition has both 1s and 0s.

    0 = is this the layer index?

    Thanks for your help.

  • weixin_39876877 5 months ago

    convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int p, int linx)

    p is the padding; it can indeed be 0 in some cases.

    linx is the index in the layer name, e.g. 0 corresponds to module_list.0.
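For illustration, the naming scheme that linx feeds into can be sketched as below (the exact key patterns are my assumption based on the PyTorch module_list naming; verify them against the keys actually present in your own .wts dump):

```python
# Sketch of how linx indexes the weight map: each convBnLeaky(..., linx)
# looks up tensors whose names are built from "module_list.<linx>".
# The submodule/parameter names here are assumptions -- check your .wts file.
def conv_bn_weight_keys(linx):
    prefix = f"module_list.{linx}"
    return [
        f"{prefix}.Conv2d.weight",
        f"{prefix}.BatchNorm2d.weight",
        f"{prefix}.BatchNorm2d.bias",
        f"{prefix}.BatchNorm2d.running_mean",
        f"{prefix}.BatchNorm2d.running_var",
    ]

print(conv_bn_weight_keys(0))
```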

  • weixin_39636226 5 months ago

    > convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int p, int linx)
    >
    > p is the padding; it can indeed be 0 in some cases.
    >
    > linx is the index in the layer name, e.g. 0 corresponds to module_list.0.

    Thanks! So I only need to set the padding according to what my own network structure shows, rather than copying your pad values, right? I also checked the ultralytics yolov3.cfg again and confirmed that it really contains no pad=0.

  • weixin_39876877 5 months ago

    This p is not the pad parameter from the cfg; it is the actual padding of the convolution layer.

    See the source: https://github.com/ultralytics/yolov3/blob/master/models.py#L31

    
    ```python
    if mdef['type'] == 'convolutional':
        bn = mdef['batch_normalize']
        filters = mdef['filters']
        k = mdef['size']  # kernel size
        stride = mdef['stride'] if 'stride' in mdef else (mdef['stride_y'], mdef['stride_x'])
        if isinstance(k, int):  # single-size conv
            modules.add_module('Conv2d', nn.Conv2d(in_channels=output_filters[-1],
                                                   out_channels=filters,
                                                   kernel_size=k,
                                                   stride=stride,
                                                   padding=k // 2 if mdef['pad'] else 0,
                                                   groups=mdef['groups'] if 'groups' in mdef else 1,
                                                   bias=not bn))
    ```
    
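In other words, the padding actually passed to the convolution follows from the kernel size, not directly from the cfg's pad flag. A quick check of the rule from the snippet above:

```python
# The rule from models.py above: padding = k // 2 when the cfg's pad flag
# is set, else 0. With pad=1 throughout the cfg, 3x3 convs get padding 1
# and 1x1 convs get padding 0 -- which is why both values show up in
# createEngine() even though the cfg never contains pad=0.
def actual_padding(kernel_size, pad_flag=1):
    return kernel_size // 2 if pad_flag else 0

print(actual_padding(3), actual_padding(1))  # -> 1 0
```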
  • weixin_39636226 5 months ago

    > This p is not the pad parameter from the cfg; it is the actual padding of the convolution layer.
    >
    > See the source: https://github.com/ultralytics/yolov3/blob/master/models.py#L31

    
    ```python
    if mdef['type'] == 'convolutional':
        bn = mdef['batch_normalize']
        filters = mdef['filters']
        k = mdef['size']  # kernel size
        stride = mdef['stride'] if 'stride' in mdef else (mdef['stride_y'], mdef['stride_x'])
        if isinstance(k, int):  # single-size conv
            modules.add_module('Conv2d', nn.Conv2d(in_channels=output_filters[-1],
                                                   out_channels=filters,
                                                   kernel_size=k,
                                                   stride=stride,
                                                   padding=k // 2 if mdef['pad'] else 0,
                                                   groups=mdef['groups'] if 'groups' in mdef else 1,
                                                   bias=not bn))
    ```
    

    Thank you very much — with this I was able to work out the exact padding values. Thanks again for your help.

