While training LPRNet, the test accuracy never changes: it reads 0.272 at epoch 32 and is still exactly 0.272 at epoch 81. The training set has 190k+ license plate images, all of a fixed size, and I can't figure out why the accuracy won't improve.
Epoch:31 || epochiter: 100/380|| Totel iter 11500 || Loss: 0.7277||Batch time: 0.0524 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 120/380|| Totel iter 11520 || Loss: 0.6873||Batch time: 0.0438 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 140/380|| Totel iter 11540 || Loss: 0.7170||Batch time: 0.7150 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 160/380|| Totel iter 11560 || Loss: 0.7471||Batch time: 0.0326 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 180/380|| Totel iter 11580 || Loss: 0.7248||Batch time: 0.5828 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 200/380|| Totel iter 11600 || Loss: 0.7089||Batch time: 0.0783 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 220/380|| Totel iter 11620 || Loss: 0.6404||Batch time: 0.5742 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 240/380|| Totel iter 11640 || Loss: 0.8387||Batch time: 0.0341 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 260/380|| Totel iter 11660 || Loss: 0.7818||Batch time: 0.4740 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 280/380|| Totel iter 11680 || Loss: 0.7460||Batch time: 0.0571 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 300/380|| Totel iter 11700 || Loss: 0.7408||Batch time: 0.4415 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 320/380|| Totel iter 11720 || Loss: 0.6865||Batch time: 0.0570 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 340/380|| Totel iter 11740 || Loss: 0.8336||Batch time: 0.2993 sec. ||LR: 0.01000000
Epoch:31 || epochiter: 360/380|| Totel iter 11760 || Loss: 0.8447||Batch time: 0.1269 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 0/380|| Totel iter 11780 || Loss: 0.7486||Batch time: 3.5372 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 20/380|| Totel iter 11800 || Loss: 0.7275||Batch time: 0.0326 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 40/380|| Totel iter 11820 || Loss: 0.7506||Batch time: 2.9425 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 60/380|| Totel iter 11840 || Loss: 0.7255||Batch time: 0.0319 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 80/380|| Totel iter 11860 || Loss: 0.7473||Batch time: 2.4504 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 100/380|| Totel iter 11880 || Loss: 0.7277||Batch time: 0.0316 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 120/380|| Totel iter 11900 || Loss: 0.6873||Batch time: 2.6045 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 140/380|| Totel iter 11920 || Loss: 0.7170||Batch time: 0.0319 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 160/380|| Totel iter 11940 || Loss: 0.7471||Batch time: 3.2460 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 180/380|| Totel iter 11960 || Loss: 0.7248||Batch time: 0.0314 sec. ||LR: 0.01000000
Epoch:32 || epochiter: 200/380|| Totel iter 11980 || Loss: 0.7089||Batch time: 2.8809 sec. ||LR: 0.01000000
[Info] Test Accuracy: 0.27200255102040816 [6824:10958:7306:25088]
Epoch:79 || epochiter: 360/380|| Totel iter 30000 || Loss: 0.8447||Batch time: 0.0585 sec. ||LR: 0.00010000
Epoch:80 || epochiter: 0/380|| Totel iter 30020 || Loss: 0.7486||Batch time: 3.3338 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 20/380|| Totel iter 30040 || Loss: 0.7275||Batch time: 0.0322 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 40/380|| Totel iter 30060 || Loss: 0.7506||Batch time: 2.8060 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 60/380|| Totel iter 30080 || Loss: 0.7255||Batch time: 0.0317 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 80/380|| Totel iter 30100 || Loss: 0.7473||Batch time: 3.6339 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 100/380|| Totel iter 30120 || Loss: 0.7277||Batch time: 0.0319 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 120/380|| Totel iter 30140 || Loss: 0.6873||Batch time: 4.0021 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 140/380|| Totel iter 30160 || Loss: 0.7170||Batch time: 0.0337 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 160/380|| Totel iter 30180 || Loss: 0.7471||Batch time: 3.4872 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 180/380|| Totel iter 30200 || Loss: 0.7248||Batch time: 0.1185 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 200/380|| Totel iter 30220 || Loss: 0.7089||Batch time: 4.2137 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 220/380|| Totel iter 30240 || Loss: 0.6404||Batch time: 0.0318 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 240/380|| Totel iter 30260 || Loss: 0.8387||Batch time: 3.2628 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 260/380|| Totel iter 30280 || Loss: 0.7818||Batch time: 0.0317 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 280/380|| Totel iter 30300 || Loss: 0.7460||Batch time: 6.3825 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 300/380|| Totel iter 30320 || Loss: 0.7408||Batch time: 0.0321 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 320/380|| Totel iter 30340 || Loss: 0.6865||Batch time: 4.0092 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 340/380|| Totel iter 30360 || Loss: 0.8336||Batch time: 0.0326 sec. ||LR: 0.00001000
Epoch:80 || epochiter: 360/380|| Totel iter 30380 || Loss: 0.8447||Batch time: 4.7633 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 0/380|| Totel iter 30400 || Loss: 0.7486||Batch time: 4.7302 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 20/380|| Totel iter 30420 || Loss: 0.7275||Batch time: 0.0322 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 40/380|| Totel iter 30440 || Loss: 0.7506||Batch time: 5.1820 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 60/380|| Totel iter 30460 || Loss: 0.7255||Batch time: 0.0336 sec. ||LR: 0.00001000
Epoch:81 || epochiter: 80/380|| Totel iter 30480 || Loss: 0.7473||Batch time: 3.0441 sec. ||LR: 0.00001000
[Info] Test Accuracy: 0.27200255102040816 [6824:10958:7306:25088]
[Info] Test Speed: 0.0011592100981020928s 1/25229]
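For what it's worth, the reported accuracy is reproducible from the bracketed counts, assuming they follow the reference LPRNet_Pytorch evaluation format `[Tp : Tn_1 : Tn_2 : total]` (exactly-correct plates, length mismatches, character mismatches, total — this interpretation of the brackets is an assumption):

```python
# Sanity-check the logged test accuracy from the bracketed counts.
# Assumption: the format is [Tp : Tn_1 : Tn_2 : total], where Tp is the
# number of plates decoded exactly right, Tn_1 counts length mismatches,
# and Tn_2 counts character mismatches (as in the LPRNet_Pytorch eval).
tp, tn1, tn2, total = 6824, 10958, 7306, 25088

assert tp + tn1 + tn2 == total  # the three buckets cover every test sample
accuracy = tp / total
print(accuracy)  # matches the logged 0.27200255102040816
```

So roughly 27% of plates decode exactly right, and the largest error bucket (10958) is length mismatches, which with a CTC head usually points at decoding or label construction rather than raw feature quality.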
Here are the relevant training parameters:
import argparse

parser = argparse.ArgumentParser(description='parameters to train net')
parser.add_argument('--max_epoch', default=140, help='epoch to train the network')
parser.add_argument('--img_size', default=[94, 24], help='the image size')
parser.add_argument('--train_img_dirs', default="/mnt/LPRNet_Pytorch-master/rec_images/train", help='the train images path')
parser.add_argument('--test_img_dirs', default="/mnt/LPRNet_Pytorch-master/rec_images/test", help='the test images path')
parser.add_argument('--dropout_rate', default=0.5, help='dropout rate.')
parser.add_argument('--learning_rate', default=0.1, help='base value of learning rate.')
parser.add_argument('--lpr_max_len', default=8, help='license plate number max length.')
parser.add_argument('--train_batch_size', default=512, help='training batch size.')
parser.add_argument('--test_batch_size', default=512, help='testing batch size.')
parser.add_argument('--phase_train', default=True, type=bool, help='train or test phase flag.')
parser.add_argument('--num_workers', default=8, type=int, help='Number of workers used in dataloading')
parser.add_argument('--cuda', default=True, type=bool, help='Use cuda to train model')
parser.add_argument('--resume_epoch', default=0, type=int, help='resume iter for retraining')
parser.add_argument('--save_interval', default=500, type=int, help='interval for save model state dict')
parser.add_argument('--test_interval', default=500, type=int, help='interval for evaluate')
parser.add_argument('--momentum', default=0.9, type=float, help='momentum')
parser.add_argument('--weight_decay', default=2e-5, type=float, help='Weight decay for SGD')
parser.add_argument('--lr_schedule', default=[20,40,60,80,100,120], help='schedule for learning rate.')
parser.add_argument('--save_folder', default='/mnt/LPRNet_Pytorch-master//weights/', help='Location to save checkpoint models')
parser.add_argument('--pretrained_model', default='/mnt/LPRNet_Pytorch-master/weights/Final_LPRNet_model.pth', help='no pretrain')
args = parser.parse_args()
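The LR values in the logs are consistent with these flags if the scheduler is a step decay that multiplies the base LR by 0.1 at each milestone in `--lr_schedule` (the gamma of 0.1 and the helper `lr_at_epoch` below are assumptions; the exact decay function depends on the training script). A minimal sketch:

```python
# Step-decay sketch: multiply the base LR by gamma once for every milestone
# already passed. gamma=0.1 is an assumption; the base LR and milestones
# come from --learning_rate and --lr_schedule above.
def lr_at_epoch(epoch, base_lr=0.1, milestones=(20, 40, 60, 80, 100, 120), gamma=0.1):
    decays = sum(1 for m in milestones if epoch >= m)
    return base_lr * gamma ** decays

print(lr_at_epoch(31))  # ~0.01,  one decay;   matches "LR: 0.01000000" at epoch 31
print(lr_at_epoch(79))  # ~1e-4,  three decays; matches "LR: 0.00010000" at epoch 79
print(lr_at_epoch(80))  # ~1e-5,  four decays;  matches "LR: 0.00001000" at epoch 80
```

So the scheduler appears to be working; the suspicious part is that the per-iteration loss values repeat identically across epochs 31, 32, and 80 even as the LR drops three orders of magnitude.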