别来BUG求求了 · 2022-05-30

[Deep learning] Training accuracy does not improve with my hand-written VGG16 network

I wrote my own VGG16 network following the VGG16 architecture, but when I train with it the accuracy never changes. If I swap in PyTorch's official VGG16 instead, the accuracy does climb from low to high, so the problem does not seem to be in my training procedure. Is there something wrong with my network model? The network code is below.

import torch
from torch import nn


class MyVGG16(nn.Module):
    def __init__(self, class_number):
        super(MyVGG16, self).__init__()
        # conv stage 1: two 3x3/64 convolutions + 2x2 max-pool; blocks 2-5
        # follow the same Conv-ReLU pattern with wider channels (and three
        # convolutions from block 3 on)
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1)
        )
        self.block3 = nn.Sequential(
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1),
        )
        self.block4 = nn.Sequential(
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1),
        )
        self.block5 = nn.Sequential(
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1),
        )
        self.block6 = nn.Sequential(
            # classifier head; 512 * 7 * 7 assumes 224x224 inputs (224 / 2^5 = 7)
            nn.Flatten(),
            nn.Linear(in_features=512 * 7 * 7, out_features=4096),
            nn.ReLU(),
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=4096, out_features=4096),
            nn.ReLU(),
            nn.Dropout(p=0.5, inplace=False),
            nn.Linear(in_features=4096, out_features=1000),
             
            # this is a class_number-way classification problem, so the output is class_number wide
            nn.Linear(1000, class_number),

            # UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
            # dim=1 added to silence that warning
            nn.Softmax(dim=1)
        )

    def forward(self, x):  # `x` instead of `input`, which shadows a Python builtin
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)
        x = self.block5(x)
        x = self.block6(x)
        return x
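For reference, the "official VGG16" comparison above was presumably set up along these lines (a minimal sketch assuming torchvision; the weights argument, the head replacement, and the class_number value are my assumptions, not code from the question):

import torch
from torch import nn
from torchvision import models

class_number = 10  # illustrative value, not from the question

# untrained official VGG16 (torchvision >= 0.13; older releases take
# pretrained=False instead of weights=None)
model = models.vgg16(weights=None)

# classifier[6] is the final Linear(4096, 1000) ImageNet head; swap it
# for a class_number-way head
model.classifier[6] = nn.Linear(4096, class_number)

# the official model ends in raw logits (no softmax), which is exactly
# what nn.CrossEntropyLoss expects
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])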

2 answers

  • 别来BUG求求了 2022-06-06 06:59

    Delete the final softmax layer, and add normalization (batch norm) before each relu; that fixes it.
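    A minimal sketch of that fix (my reconstruction, not the asker's actual code; class_number is an illustrative value): the trailing nn.Softmax goes away because nn.CrossEntropyLoss already applies log-softmax internally, so feeding it softmax outputs squashes the logits and training stalls at flat accuracy, and nn.BatchNorm2d goes in front of each ReLU in the conv blocks.

    from torch import nn

    class_number = 10  # illustrative value, not from the question

    # conv stage with BatchNorm2d inserted before each ReLU (block1 shown;
    # blocks 2-5 get the same treatment with their own channel widths)
    block1 = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

    # classifier head ending in raw logits -- no nn.Softmax
    block6 = nn.Sequential(
        nn.Flatten(),
        nn.Linear(512 * 7 * 7, 4096),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(4096, 4096),
        nn.ReLU(),
        nn.Dropout(p=0.5),
        nn.Linear(4096, class_number),  # straight to class_number
    )

    Conv-BN-ReLU is the same ordering torchvision's vgg16_bn variant uses, so the fixed network ends up very close to that model.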

    Accepted by the asker as the best answer.


Question events

  • Question closed by the system on Jun 14
  • Answer accepted on Jun 6
  • Question created on May 30
