Hi, while running the YOLOv5 custom-layer pruning (detection-head pruning) from https://blog.csdn.net/qq_40709711/article/details/129668765?spm=1001.2014.3001.5502 I ran into some problems, and I'm hoping you can help.
I'm doing the same head pruning on YOLOv5s-MobileNetV3. Sparsity training finished without issues, but the pruning step fails with:
RuntimeError: Given groups=16, weight of size [16, 1, 3, 3], expected input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead
The full output and error are as follows:
Suggested Gamma threshold should be less than 0.2302.
The corresponding prune ratio is 0.772.
Gamma value that less than 0.1678 are set to zero!
==============================================================================================
| layer name | origin channels | remaining channels |
| model.0.bn | 8 | 8 |
| model.1.conv.1 | 16 | 16 |
| model.1.conv.4 | 16 | 16 |
| model.1.conv.8 | 8 | 8 |
| model.2.conv.1 | 72 | 72 |
| model.2.conv.4 | 72 | 72 |
| model.2.conv.8 | 16 | 16 |
| model.3.conv.1 | 88 | 88 |
| model.3.conv.4 | 88 | 88 |
| model.3.conv.8 | 16 | 16 |
| model.4.conv.1 | 96 | 96 |
| model.4.conv.4 | 96 | 96 |
| model.4.conv.8 | 24 | 24 |
| model.5.conv.1 | 240 | 240 |
| model.5.conv.4 | 240 | 240 |
| model.5.conv.8 | 24 | 24 |
| model.6.conv.1 | 240 | 240 |
| model.6.conv.4 | 240 | 240 |
| model.6.conv.8 | 24 | 24 |
| model.7.conv.1 | 120 | 120 |
| model.7.conv.4 | 120 | 120 |
| model.7.conv.8 | 24 | 24 |
| model.8.conv.1 | 144 | 144 |
| model.8.conv.4 | 144 | 144 |
| model.8.conv.8 | 24 | 24 |
| model.9.conv.1 | 288 | 288 |
| model.9.conv.4 | 288 | 288 |
| model.9.conv.8 | 48 | 48 |
| model.10.conv.1 | 576 | 576 |
| model.10.conv.4 | 576 | 576 |
| model.10.conv.8 | 48 | 48 |
| model.11.conv.1 | 576 | 576 |
| model.11.conv.4 | 576 | 576 |
| model.11.conv.8 | 48 | 48 |
| model.12.bn | 256 | 14 |
| model.15.cv1.bn | 128 | 24 |
| model.15.cv2.bn | 128 | 4 |
| model.15.cv3.bn | 256 | 41 |
| model.15.m.0.cv1.bn | 128 | 30 |
| model.15.m.0.cv2.bn | 128 | 41 |
| model.16.bn | 128 | 55 |
| model.19.cv1.bn | 64 | 63 |
| model.19.cv2.bn | 64 | 15 |
| model.19.cv3.bn | 128 | 101 |
| model.19.m.0.cv1.bn | 64 | 62 |
| model.19.m.0.cv2.bn | 64 | 64 |
| model.20.bn | 128 | 43 |
| model.22.cv1.bn | 128 | 47 |
| model.22.cv2.bn | 128 | 10 |
| model.22.cv3.bn | 256 | 204 |
| model.22.m.0.cv1.bn | 128 | 46 |
| model.22.m.0.cv2.bn | 128 | 69 |
| model.23.bn | 256 | 40 |
| model.25.cv1.bn | 256 | 15 |
| model.25.cv2.bn | 256 | 30 |
| model.25.cv3.bn | 512 | 209 |
| model.25.m.0.cv1.bn | 256 | 19 |
| model.25.m.0.cv2.bn | 256 | 23 |
==============================================================================================
from n params module arguments
0 -1 1 232 models.common.conv_bn_hswish [3, 8, 2]
1 -1 1 468 models.pruned_common.MobileNet_BlockPruned[8, 16, 16, 8, False, 2, 3, 2, 1, 0]
2 -1 1 2696 models.pruned_common.MobileNet_BlockPruned[8, 72, 72, 16, False, 3, 3, 2, 0, 0]
3 -1 1 3992 models.pruned_common.MobileNet_BlockPruned[16, 88, 88, 16, True, 3, 3, 1, 0, 0]
4 -1 1 11400 models.pruned_common.MobileNet_BlockPruned[16, 96, 96, 24, False, 3, 5, 2, 1, 1]
5 -1 1 47628 models.pruned_common.MobileNet_BlockPruned[24, 240, 240, 24, True, 3, 5, 1, 1, 1]
6 -1 1 47628 models.pruned_common.MobileNet_BlockPruned[24, 240, 240, 24, True, 3, 5, 1, 1, 1]
7 -1 1 16638 models.pruned_common.MobileNet_BlockPruned[24, 120, 120, 24, False, 3, 5, 1, 1, 1]
8 -1 1 21684 models.pruned_common.MobileNet_BlockPruned[24, 144, 144, 24, True, 3, 5, 1, 1, 1]
9 -1 1 71016 models.pruned_common.MobileNet_BlockPruned[24, 288, 288, 48, False, 3, 5, 2, 1, 1]
10 -1 1 238704 models.pruned_common.MobileNet_BlockPruned[48, 576, 576, 48, True, 3, 5, 1, 1, 1]
11 -1 1 238704 models.pruned_common.MobileNet_BlockPruned[48, 576, 576, 48, True, 3, 5, 1, 1, 1]
12 -1 1 700 models.common.Conv [48, 14, 1, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 [-1, 8] 1 0 models.common.Concat [1]
15 -1 1 14979 models.pruned_common.C3Pruned [38, 24, 4, 41, [[24, 30, 41]], 1, False]
16 -1 1 2365 models.common.Conv [41, 55, 1, 1]
17 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
18 [-1, 3] 1 0 models.common.Concat [1]
19 -1 1 53745 models.pruned_common.C3Pruned [71, 63, 15, 101, [[63, 62, 64]], 1, False]
20 -1 1 39173 models.common.Conv [101, 43, 3, 2]
21 [-1, 16] 1 0 models.common.Concat [1]
22 -1 1 53182 models.pruned_common.C3Pruned [98, 47, 10, 204, [[47, 46, 69]], 1, False]
23 -1 1 73520 models.common.Conv [204, 40, 3, 2]
24 [-1, 12] 1 0 models.common.Concat [1]
25 -1 1 18317 models.pruned_common.C3Pruned [54, 15, 30, 209, [[15, 19, 23]], 1, False]
26 [19, 22, 25] 1 13959 models.yolo.Detect [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [101, 204, 209]]
# Everything up to this point still looks fine
Traceback (most recent call last):
File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 894, in <module>
main(opt)
File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 867, in main
run_prune(**vars(opt))
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 532, in run_prune
pruned_model = ModelPruned(maskbndict=maskbndict, cfg=pruned_yaml, ch=3).cuda()
File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 272, in __init__
m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 284, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 307, in _forward_once
x = m(x) # run
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\pruned_common.py", line 114, in forward
y = self.conv(x)
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
input = module(input)
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=16, weight of size [16, 1, 3, 3], expected input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead
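
From the error, my reading is that the 8-channel output of model.0 (model.0.bn keeps 8 channels) is going straight into a depthwise conv that still has groups=16 and weight [16, 1, 3, 3], i.e. the 1x1 expansion conv in model.1 is apparently skipped or its mask not applied when the pruned block is rebuilt. A minimal standalone PyTorch sketch (my own code, not from the repo) that reproduces the exact same error:

import torch
import torch.nn as nn

# Depthwise 3x3 conv as in the unpruned block: groups=16 means each filter
# sees exactly one input channel, so the weight is [16, 1, 3, 3] and the
# layer strictly requires a 16-channel input.
dw = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1, groups=16, bias=False)
print(dw.weight.shape)  # torch.Size([16, 1, 3, 3])

# After pruning, the layer in front of it only delivers 8 channels:
x = torch.zeros(1, 8, 128, 128)
dw(x)  # RuntimeError: Given groups=16, weight of size [16, 1, 3, 3],
       # expected input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead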
Hoping you can take a look and point me in the right direction.
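
Update: if I understand the constraint correctly, a depthwise conv has in_channels == out_channels == groups, so all three have to shrink together under the same channel mask as the layer feeding it; pruning one side alone produces exactly this mismatch. A rough sketch of what I mean (rebuild_depthwise is a hypothetical helper of mine, not from the repo):

import torch
import torch.nn as nn

def rebuild_depthwise(old: nn.Conv2d, keep: torch.Tensor) -> nn.Conv2d:
    # keep: 0/1 (or bool) mask over the channels that survive pruning.
    # For a depthwise conv, in/out channels and groups must all shrink
    # to the same kept-channel count.
    idx = keep.nonzero().flatten()
    c = len(idx)
    new = nn.Conv2d(c, c, old.kernel_size, old.stride, old.padding,
                    groups=c, bias=old.bias is not None)
    new.weight.data = old.weight.data[idx].clone()  # one filter per kept channel
    if old.bias is not None:
        new.bias.data = old.bias.data[idx].clone()
    return new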