我的AI之路 2024-05-01 20:53 · Acceptance rate: 58.3%
Views: 13
Closed

Why won't Acc improve when training LPRNet?

While training LPRNet, the accuracy never changes. The training set contains 190k+ images, all fixed-size license plates, and I can't figure out why Acc won't improve.

Epoch:31 || epochiter: 100/380|| Totel iter 11500 || Loss: 0.7277||Batch time: 0.0524 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 120/380|| Totel iter 11520 || Loss: 0.6873||Batch time: 0.0438 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 140/380|| Totel iter 11540 || Loss: 0.7170||Batch time: 0.7150 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 160/380|| Totel iter 11560 || Loss: 0.7471||Batch time: 0.0326 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 180/380|| Totel iter 11580 || Loss: 0.7248||Batch time: 0.5828 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 200/380|| Totel iter 11600 || Loss: 0.7089||Batch time: 0.0783 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 220/380|| Totel iter 11620 || Loss: 0.6404||Batch time: 0.5742 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 240/380|| Totel iter 11640 || Loss: 0.8387||Batch time: 0.0341 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 260/380|| Totel iter 11660 || Loss: 0.7818||Batch time: 0.4740 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 280/380|| Totel iter 11680 || Loss: 0.7460||Batch time: 0.0571 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 300/380|| Totel iter 11700 || Loss: 0.7408||Batch time: 0.4415 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 320/380|| Totel iter 11720 || Loss: 0.6865||Batch time: 0.0570 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 340/380|| Totel iter 11740 || Loss: 0.8336||Batch time: 0.2993 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 360/380|| Totel iter 11760 || Loss: 0.8447||Batch time: 0.1269 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 0/380|| Totel iter 11780 || Loss: 0.7486||Batch time: 3.5372 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 20/380|| Totel iter 11800 || Loss: 0.7275||Batch time: 0.0326 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 40/380|| Totel iter 11820 || Loss: 0.7506||Batch time: 2.9425 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 60/380|| Totel iter 11840 || Loss: 0.7255||Batch time: 0.0319 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 80/380|| Totel iter 11860 || Loss: 0.7473||Batch time: 2.4504 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 100/380|| Totel iter 11880 || Loss: 0.7277||Batch time: 0.0316 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 120/380|| Totel iter 11900 || Loss: 0.6873||Batch time: 2.6045 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 140/380|| Totel iter 11920 || Loss: 0.7170||Batch time: 0.0319 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 160/380|| Totel iter 11940 || Loss: 0.7471||Batch time: 3.2460 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 180/380|| Totel iter 11960 || Loss: 0.7248||Batch time: 0.0314 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 200/380|| Totel iter 11980 || Loss: 0.7089||Batch time: 2.8809 sec. ||LR: 0.01000000
[Info] Test Accuracy: 0.27200255102040816 [6824:10958:7306:25088]


Epoch:79 || epochiter: 360/380|| Totel iter 30000 || Loss: 0.8447||Batch time: 0.0585 sec. ||LR: 0.00010000
Epoch:80 || epochiter: 0/380|| Totel iter 30020 || Loss: 0.7486||Batch time: 3.3338 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 20/380|| Totel iter 30040 || Loss: 0.7275||Batch time: 0.0322 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 40/380|| Totel iter 30060 || Loss: 0.7506||Batch time: 2.8060 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 60/380|| Totel iter 30080 || Loss: 0.7255||Batch time: 0.0317 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 80/380|| Totel iter 30100 || Loss: 0.7473||Batch time: 3.6339 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 100/380|| Totel iter 30120 || Loss: 0.7277||Batch time: 0.0319 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 120/380|| Totel iter 30140 || Loss: 0.6873||Batch time: 4.0021 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 140/380|| Totel iter 30160 || Loss: 0.7170||Batch time: 0.0337 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 160/380|| Totel iter 30180 || Loss: 0.7471||Batch time: 3.4872 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 180/380|| Totel iter 30200 || Loss: 0.7248||Batch time: 0.1185 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 200/380|| Totel iter 30220 || Loss: 0.7089||Batch time: 4.2137 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 220/380|| Totel iter 30240 || Loss: 0.6404||Batch time: 0.0318 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 240/380|| Totel iter 30260 || Loss: 0.8387||Batch time: 3.2628 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 260/380|| Totel iter 30280 || Loss: 0.7818||Batch time: 0.0317 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 280/380|| Totel iter 30300 || Loss: 0.7460||Batch time: 6.3825 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 300/380|| Totel iter 30320 || Loss: 0.7408||Batch time: 0.0321 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 320/380|| Totel iter 30340 || Loss: 0.6865||Batch time: 4.0092 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 340/380|| Totel iter 30360 || Loss: 0.8336||Batch time: 0.0326 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 360/380|| Totel iter 30380 || Loss: 0.8447||Batch time: 4.7633 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 0/380|| Totel iter 30400 || Loss: 0.7486||Batch time: 4.7302 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 20/380|| Totel iter 30420 || Loss: 0.7275||Batch time: 0.0322 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 40/380|| Totel iter 30440 || Loss: 0.7506||Batch time: 5.1820 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 60/380|| Totel iter 30460 || Loss: 0.7255||Batch time: 0.0336 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 80/380|| Totel iter 30480 || Loss: 0.7473||Batch time: 3.0441 sec. ||LR: 0.00001000
[Info] Test Accuracy: 0.27200255102040816 [6824:10958:7306:25088]
[Info] Test Speed: 0.0011592100981020928s 1/25229]

Here are the relevant training parameters:

import argparse

parser = argparse.ArgumentParser(description='parameters to train net')
parser.add_argument('--max_epoch', default=140, type=int, help='epoch to train the network')
parser.add_argument('--img_size', default=[94, 24], help='the image size')
parser.add_argument('--train_img_dirs', default="/mnt/LPRNet_Pytorch-master/rec_images/train", help='the train images path')
parser.add_argument('--test_img_dirs', default="/mnt/LPRNet_Pytorch-master/rec_images/test", help='the test images path')
parser.add_argument('--dropout_rate', default=0.5, type=float, help='dropout rate.')
parser.add_argument('--learning_rate', default=0.1, type=float, help='base value of learning rate.')
parser.add_argument('--lpr_max_len', default=8, type=int, help='license plate number max length.')
parser.add_argument('--train_batch_size', default=512, type=int, help='training batch size.')
parser.add_argument('--test_batch_size', default=512, type=int, help='testing batch size.')
parser.add_argument('--phase_train', default=True, type=bool, help='train or test phase flag.')
parser.add_argument('--num_workers', default=8, type=int, help='Number of workers used in dataloading')
parser.add_argument('--cuda', default=True, type=bool, help='Use cuda to train model')
parser.add_argument('--resume_epoch', default=0, type=int, help='resume iter for retraining')
parser.add_argument('--save_interval', default=500, type=int, help='interval for save model state dict')
parser.add_argument('--test_interval', default=500, type=int, help='interval for evaluate')
parser.add_argument('--momentum', default=0.9, type=float, help='momentum')
parser.add_argument('--weight_decay', default=2e-5, type=float, help='Weight decay for SGD')
parser.add_argument('--lr_schedule', default=[20,40,60,80,100,120], help='schedule for learning rate.')
parser.add_argument('--save_folder', default='/mnt/LPRNet_Pytorch-master//weights/', help='Location to save checkpoint models')
parser.add_argument('--pretrained_model', default='/mnt/LPRNet_Pytorch-master/weights/Final_LPRNet_model.pth', help='no pretrain')
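One pitfall worth checking in the block above: argparse's `type=bool` does not do what it looks like. `bool('False')` is `True`, so passing `--cuda False` or `--phase_train False` on the command line would still enable those flags. A minimal sketch of the usual workaround (the `str2bool` helper is my own addition, not part of the original script):

```python
import argparse

def str2bool(v):
    """Convert common truthy/falsy strings to bool for argparse."""
    if isinstance(v, bool):
        return v
    return v.lower() in ("yes", "true", "t", "1")

parser = argparse.ArgumentParser()
# With type=bool, parse_args(['--cuda', 'False']) would yield True,
# because bool() of any non-empty string is True.
parser.add_argument('--cuda', default=True, type=str2bool)
args = parser.parse_args(['--cuda', 'False'])
print(args.cuda)  # False with str2bool; would have been True with type=bool
```

If the script is always run with the defaults this bug is dormant, but it is worth ruling out before debugging accuracy.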



6 answers · sorted by: default / newest

  • GISer Liu 2024-05-01 20:56

This answer cites GPT-3.5 and was compiled by blogger GISer Liu:

    Based on the information you provided, the accuracy (Acc) fails to improve while training LPRNet. Here is my analysis of the problem and some suggested fixes:
    Problem analysis:

    1. Dataset issues: You mention a training set of 190k+ images, which is fairly large, but the problem may still lie in the data itself. Even with a large dataset, poor image quality or inaccurate labels will keep the model from reaching good accuracy.
    2. Model architecture and parameter settings: You did not provide details of the LPRNet architecture or its hyperparameter settings. The choice of architecture and the soundness of the hyperparameters directly affect performance.
    3. Training-process issues: In the logs you posted, the lack of any accuracy improvement stands out. This may be caused by too high a learning rate, or by problems such as overfitting during training.

    Solutions:

    1. Check dataset quality: First, audit the training data carefully. Make sure the license-plate labels are correct and complete, the images are of good quality, and the set covers the full range of scenes and variations.
    2. Tune the model: Try adjusting the architecture and hyperparameters. This may include alternative network variants, different learning rates and regularization strengths, and data augmentation.
    3. Revisit the learning-rate schedule: Your logs show a fixed step schedule that decays the LR from 0.01 down to 1e-5 by epoch 80 while the loss stays flat around 0.7, so further decay is not helping; the base learning rate or the milestone schedule itself likely needs tuning.
    4. Watch for overfitting: Monitor the model on a validation set during training and react accordingly, e.g. with early stopping or stronger regularization.
    5. Debug and analyze in detail: Track how the loss evolves and how the model performs on the training vs. validation sets to help localize the problem.
      In summary, start with dataset quality, then iterate on the architecture, hyperparameters, and training process step by step to raise accuracy. If you need further technical support or have other questions, feel free to keep the conversation going!
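    On the schedule point: with the posted `--lr_schedule [20,40,60,80,100,120]`, base LR 0.1, and (judging by the logs) a decay factor of 0.1 per milestone, the LR at any epoch can be reproduced with a small helper. This is a sketch of MultiStep-style decay under those assumptions, useful for sanity-checking what LR a given epoch actually runs at, not the project's actual scheduler code:

    ```python
    # Sketch of MultiStep-style LR decay: multiply the base LR by `gamma`
    # once for every milestone the current epoch has already passed.
    def lr_at_epoch(base_lr, milestones, gamma, epoch):
        passed = sum(1 for m in milestones if epoch >= m)
        return base_lr * gamma ** passed

    milestones = [20, 40, 60, 80, 100, 120]
    # Epoch 31 has passed one milestone -> 0.1 * 0.1, which matches the
    # LR: 0.01000000 seen in the epoch-31 log lines.
    print(round(lr_at_epoch(0.1, milestones, 0.1, 31), 8))
    # Epoch 80 has passed four milestones -> 1e-5, matching LR: 0.00001000.
    print(round(lr_at_epoch(0.1, milestones, 0.1, 80), 8))
    ```

    Since the loss plateau appears well before the first decay, the issue is more likely the base LR or the data/labels than the milestone placement.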

    If this answer solved your problem, please accept it! If not, send me a private message or leave a comment with your questions.

    This answer was selected as the best answer by the asker.


Question timeline

  • Closed by the system on May 10
  • Answer accepted on May 2
  • Question created on May 1
