AZSUN199 2024-05-10 16:34 Acceptance rate: 0%
27 views
Closed

YOLOv5 custom pruning error: how to fix it?

Hi, I was following the YOLOv5 custom-layer pruning (detection-head pruning) tutorial at https://blog.csdn.net/qq_40709711/article/details/129668765?spm=1001.2014.3001.5502 when I ran into some problems, and I'm hoping someone can help.
I am doing the same head pruning on a YOLOv5s-MobileNetV3 model. Sparse training completed without issues, but the pruning step fails with:
RuntimeError: Given groups=16, weight of size [16, 1, 3, 3], expected input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead

The full output is:

Suggested Gamma threshold should be less than 0.2302.
The corresponding prune ratio is 0.772.
Gamma value that less than 0.1678 are set to zero!
==============================================================================================
|       layer name               |         origin channels     |         remaining channels  |
|       model.0.bn               |         8                   |         8                   |
|       model.1.conv.1           |         16                  |         16                  |
|       model.1.conv.4           |         16                  |         16                  |
|       model.1.conv.8           |         8                   |         8                   |
|       model.2.conv.1           |         72                  |         72                  |
|       model.2.conv.4           |         72                  |         72                  |
|       model.2.conv.8           |         16                  |         16                  |
|       model.3.conv.1           |         88                  |         88                  |
|       model.3.conv.4           |         88                  |         88                  |
|       model.3.conv.8           |         16                  |         16                  |
|       model.4.conv.1           |         96                  |         96                  |
|       model.4.conv.4           |         96                  |         96                  |
|       model.4.conv.8           |         24                  |         24                  |
|       model.5.conv.1           |         240                 |         240                 |
|       model.5.conv.4           |         240                 |         240                 |
|       model.5.conv.8           |         24                  |         24                  |
|       model.6.conv.1           |         240                 |         240                 |
|       model.6.conv.4           |         240                 |         240                 |
|       model.6.conv.8           |         24                  |         24                  |
|       model.7.conv.1           |         120                 |         120                 |
|       model.7.conv.4           |         120                 |         120                 |
|       model.7.conv.8           |         24                  |         24                  |
|       model.8.conv.1           |         144                 |         144                 |
|       model.8.conv.4           |         144                 |         144                 |
|       model.8.conv.8           |         24                  |         24                  |
|       model.9.conv.1           |         288                 |         288                 |
|       model.9.conv.4           |         288                 |         288                 |
|       model.9.conv.8           |         48                  |         48                  |
|       model.10.conv.1          |         576                 |         576                 |
|       model.10.conv.4          |         576                 |         576                 |
|       model.10.conv.8          |         48                  |         48                  |
|       model.11.conv.1          |         576                 |         576                 |
|       model.11.conv.4          |         576                 |         576                 |
|       model.11.conv.8          |         48                  |         48                  |
|       model.12.bn              |         256                 |         14                  |
|       model.15.cv1.bn          |         128                 |         24                  |
|       model.15.cv2.bn          |         128                 |         4                   |
|       model.15.cv3.bn          |         256                 |         41                  |
|       model.15.m.0.cv1.bn      |         128                 |         30                  |
|       model.15.m.0.cv2.bn      |         128                 |         41                  |
|       model.16.bn              |         128                 |         55                  |
|       model.19.cv1.bn          |         64                  |         63                  |
|       model.19.cv2.bn          |         64                  |         15                  |
|       model.19.cv3.bn          |         128                 |         101                 |
|       model.19.m.0.cv1.bn      |         64                  |         62                  |
|       model.19.m.0.cv2.bn      |         64                  |         64                  |
|       model.20.bn              |         128                 |         43                  |
|       model.22.cv1.bn          |         128                 |         47                  |
|       model.22.cv2.bn          |         128                 |         10                  |
|       model.22.cv3.bn          |         256                 |         204                 |
|       model.22.m.0.cv1.bn      |         128                 |         46                  |
|       model.22.m.0.cv2.bn      |         128                 |         69                  |
|       model.23.bn              |         256                 |         40                  |
|       model.25.cv1.bn          |         256                 |         15                  |
|       model.25.cv2.bn          |         256                 |         30                  |
|       model.25.cv3.bn          |         512                 |         209                 |
|       model.25.m.0.cv1.bn      |         256                 |         19                  |
|       model.25.m.0.cv2.bn      |         256                 |         23                  |
==============================================================================================
                 from  n    params  module                                  arguments
  0                -1  1       232  models.common.conv_bn_hswish            [3, 8, 2]
  1                -1  1       468  models.pruned_common.MobileNet_BlockPruned[8, 16, 16, 8, False, 2, 3, 2, 1, 0]
  2                -1  1      2696  models.pruned_common.MobileNet_BlockPruned[8, 72, 72, 16, False, 3, 3, 2, 0, 0]
  3                -1  1      3992  models.pruned_common.MobileNet_BlockPruned[16, 88, 88, 16, True, 3, 3, 1, 0, 0]
  4                -1  1     11400  models.pruned_common.MobileNet_BlockPruned[16, 96, 96, 24, False, 3, 5, 2, 1, 1]
  5                -1  1     47628  models.pruned_common.MobileNet_BlockPruned[24, 240, 240, 24, True, 3, 5, 1, 1, 1]
  6                -1  1     47628  models.pruned_common.MobileNet_BlockPruned[24, 240, 240, 24, True, 3, 5, 1, 1, 1]
  7                -1  1     16638  models.pruned_common.MobileNet_BlockPruned[24, 120, 120, 24, False, 3, 5, 1, 1, 1]
  8                -1  1     21684  models.pruned_common.MobileNet_BlockPruned[24, 144, 144, 24, True, 3, 5, 1, 1, 1]
  9                -1  1     71016  models.pruned_common.MobileNet_BlockPruned[24, 288, 288, 48, False, 3, 5, 2, 1, 1]
 10                -1  1    238704  models.pruned_common.MobileNet_BlockPruned[48, 576, 576, 48, True, 3, 5, 1, 1, 1]
 11                -1  1    238704  models.pruned_common.MobileNet_BlockPruned[48, 576, 576, 48, True, 3, 5, 1, 1, 1]
 12                -1  1       700  models.common.Conv                      [48, 14, 1, 1]
 13                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 14           [-1, 8]  1         0  models.common.Concat                    [1]
 15                -1  1     14979  models.pruned_common.C3Pruned           [38, 24, 4, 41, [[24, 30, 41]], 1, False]
 16                -1  1      2365  models.common.Conv                      [41, 55, 1, 1]
 17                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 18           [-1, 3]  1         0  models.common.Concat                    [1]
 19                -1  1     53745  models.pruned_common.C3Pruned           [71, 63, 15, 101, [[63, 62, 64]], 1, False]
 20                -1  1     39173  models.common.Conv                      [101, 43, 3, 2]
 21          [-1, 16]  1         0  models.common.Concat                    [1]
 22                -1  1     53182  models.pruned_common.C3Pruned           [98, 47, 10, 204, [[47, 46, 69]], 1, False]
 23                -1  1     73520  models.common.Conv                      [204, 40, 3, 2]
 24          [-1, 12]  1         0  models.common.Concat                    [1]
 25                -1  1     18317  models.pruned_common.C3Pruned           [54, 15, 30, 209, [[15, 19, 23]], 1, False]
 26      [19, 22, 25]  1     13959  models.yolo.Detect                      [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [101, 204, 209]]
# Up to this point everything is still fine

Traceback (most recent call last):
  File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 894, in <module>
    main(opt)
  File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 867, in main
    run_prune(**vars(opt))
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "c:/pyproject/YOLO/YOLOv5_MobileNetV3_Prune/yolov5_MobileNetV3_prune/prune.py", line 532, in run_prune
    pruned_model = ModelPruned(maskbndict=maskbndict, cfg=pruned_yaml, ch=3).cuda()
  File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 272, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
  File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 284, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\yolo.py", line 307, in _forward_once
    x = m(x)  # run
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "c:\pyproject\YOLO\YOLOv5_MobileNetV3_Prune\yolov5_MobileNetV3_prune\models\pruned_common.py", line 114, in forward
    y = self.conv(x)
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\container.py", line 119, in forward
    input = module(input)
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\25198\anaconda3\envs\yolo_v5\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=16, weight of size [16, 1, 3, 3], expected input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead        

Hoping someone who sees this can offer some guidance.


30 answers

  • 专家-郭老师 (Java rising-star creator) 2024-05-10 16:35
    Earned a ¥0.40 bounty for this answer

    Answer based on Tongyi Qianwen (通义千问), compiled by 郭老师.

    Based on the error message, the problem occurs while pruning the head of your YOLOv5s-MobileNetV3 model: when a convolution executes, the channel count it was built for does not match the channel count it receives. A weight of size [16, 1, 3, 3] with groups=16 is a depthwise convolution built for 16 channels, but it is receiving an 8-channel feature map. Since your table shows model.0.bn keeping 8 channels and the failure happens inside MobileNet_BlockPruned's forward at 128x128 resolution (i.e. right after model.0's stride-2 conv), this suggests that model.1's depthwise conv is being fed model.0's 8-channel output directly, meaning the block's 1x1 expansion conv (which should lift 8 channels to 16) is being skipped or built with mismatched channels in the pruned block.
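
    To make the error concrete, here is a minimal sketch (plain PyTorch, independent of your project code) that reproduces the same failure mode: a depthwise conv declared for 16 channels receiving an 8-channel input.

    import torch
    import torch.nn as nn

    # Depthwise conv for 16 channels: weight shape [16, 1, 3, 3], groups=16
    dw = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16)

    x = torch.zeros(1, 8, 128, 128)  # 8-channel feature map, as in your trace
    dw(x)
    # RuntimeError: Given groups=16, weight of size [16, 1, 3, 3], expected
    # input[1, 8, 128, 128] to have 16 channels, but got 8 channels instead

    Fixing it therefore means making the producing layer and this depthwise conv agree on the channel count, not changing the input data.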

    To resolve this, you need to make sure the channel counts are set consistently during pruning. That usually involves the following steps:

    1. Check the pruning config: make sure that in the pruned config file (pruned_yaml) every conv layer is given the correct output channel count. When layers are pruned their output channels shrink, so these values must be updated to match the pruned structure.

    2. Check the pruning code: when the pruned model is built, the pruning result (maskbndict) must be applied consistently to each conv layer's input and output channels. Note that a depthwise conv cannot be pruned independently: its channel count must equal that of the layer feeding it. If your pruning code does not enforce this coupling automatically, you may need to adjust it by hand (a minimal consistency-check sketch appears at the end of this answer).

    3. Channel alignment: if the channel count at some point in the network changed (here, 8 channels arriving where 16 are expected), make sure every consumer of that tensor was rebuilt with the new count.

    4. Code review: check the forward method in yolov5_MobileNetV3_prune/models/pruned_common.py, in particular the condition under which MobileNet_BlockPruned includes or skips the 1x1 expansion conv, and verify the channel counts the block is constructed with.

    5. Error tracing: if the steps above don't locate the problem, add instrumentation that prints each layer's expected and actual input channel counts, so you can see exactly where the mismatch first appears (see the hook sketch right after this list).
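
    For step 5, here is a hedged sketch of such tracing using PyTorch forward pre-hooks. Only the hook mechanics are standard PyTorch; pruned_model and the input size below are placeholders for your own objects.

    import torch
    import torch.nn as nn

    def trace_conv_channels(model):
        # Register a pre-hook on every Conv2d that reports a mismatch between
        # the channels the layer was built for and the channels it receives.
        handles = []
        for name, m in model.named_modules():
            if isinstance(m, nn.Conv2d):
                def hook(module, inputs, name=name):
                    c_in = inputs[0].shape[1]
                    if c_in != module.in_channels:
                        print(f"{name}: built for {module.in_channels} input "
                              f"channels, received {c_in}")
                handles.append(m.register_forward_pre_hook(hook))
        return handles

    # Usage (placeholder model/input): the first printed line names the layer
    # where the channel chain breaks, just before the RuntimeError is raised.
    # handles = trace_conv_channels(pruned_model)
    # pruned_model(torch.zeros(1, 3, 256, 256))
    # for h in handles:
    #     h.remove()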

    It is a good idea to verify the model structure before pruning and again after pruning, so that every layer's channel counts line up. I hope these hints help you solve the problem! If you can share more details, I'll try to provide more specific code.
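
    Finally, the consistency check mentioned in step 2, as a minimal sketch. It assumes maskbndict maps BN layer names (as printed in your table) to 0/1 gamma-mask tensors; the producer/consumer pairs must come from your own network graph, and the pair shown is only an illustration:

    # Hypothetical sketch: for each (producer BN, consumer BN) edge that is
    # directly connected in the network, the kept channel counts must agree.
    def kept(mask):
        return int(mask.sum())

    def check_edges(maskbndict, edges):
        for src, dst in edges:
            if kept(maskbndict[src]) != kept(maskbndict[dst]):
                print(f"channel mismatch: {src} keeps {kept(maskbndict[src])} "
                      f"channels but {dst} keeps {kept(maskbndict[dst])}")

    # Illustrative edge only -- in your model the suspect pair is model.0's
    # output feeding the depthwise conv inside model.1:
    # check_edges(maskbndict, [("model.0.bn", "model.1.conv.1")])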


Question timeline

  • Closed by the system on May 18
  • Question created on May 10
