^孤独的大白^ 2021-12-28 22:22

Python: stacked CNN-ConvLSTM model built with Keras' ConvLSTM2D keeps a near-constant accuracy during training; how can I improve it?

The model is for precipitation prediction. The input is daily accumulated precipitation for every day from 1979 to 2016: 10000 training samples and 3868 test samples, with precipitation as both input and output. The raw data is three-dimensional (time, latitude, longitude); a timestep axis was inserted as the second dimension and a channel axis of size 1 appended as the last dimension, as required by ConvLSTM2D. During training, acc on both the training and validation sets never moves, staying around 0.11, even though mse and mae both keep decreasing.
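
To make the input reshaping described above concrete, here is a minimal sketch of the dimension expansion (the array sizes are illustrative, not taken from the actual file):

import numpy as np

pre = np.zeros((13880, 60, 120))              # (time, lat, lon) as loaded; day count illustrative
window = pre[0:6]                             # six consecutive days -> (6, 60, 120)
sample = window[np.newaxis, ..., np.newaxis]  # -> (1, 6, 60, 120, 1)
# ConvLSTM2D expects 5-D input: (batch, time, height, width, channels)
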
The goal is to get acc to rise, so that the predictions match the observations more closely, in preparation for the later test-set predictions.
Here is the code:

import numpy as np
import keras
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

# let GPU memory grow on demand instead of pre-allocating all of it
config = ConfigProto()
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)

from keras.models import Sequential
from keras.layers.convolutional import Conv3D
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
from keras.layers import Dropout, MaxPooling3D, UpSampling3D
from keras.callbacks import ModelCheckpoint

def normalize_0_1(precipitation):
    '''min-max normalize the whole precipitation field to [0, 1]'''
    precipitation_max = precipitation.max()
    precipitation_min = precipitation.min()
    normal_precipitation = (precipitation - precipitation_min) / (precipitation_max - precipitation_min)
    mins_maxs = [precipitation_min, precipitation_max]
    return np.array(normal_precipitation), mins_maxs

def denormalize(precipitation, precipitation_mins_maxs):
    '''invert normalize_0_1 using the stored [min, max] pair'''
    res = precipitation * (precipitation_mins_maxs[1] - precipitation_mins_maxs[0]) + precipitation_mins_maxs[0]
    return res
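
# quick round-trip sanity check of the two helpers above (illustrative values)
_x = np.array([0.0, 5.0, 10.0])
_xn, _mm = normalize_0_1(_x)                   # -> [0., 0.5, 1.], _mm == [0.0, 10.0]
assert np.allclose(denormalize(_xn, _mm), _x)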

# daily precipitation grids, 1979-2016, on a 60 x 120 lat/lon grid
pre_path = r'/pre79_16_ecmwf_60_120.npy'
pre = np.load(pre_path)
pre, pre_min_max = normalize_0_1(pre)

height = 60
width = 120
train_num = 10000
test_num = 3868
timestep = 6  # days per input (and output) sequence

x_train = np.zeros((train_num, timestep, height, width, 1))
y_train = np.zeros((train_num, timestep, height, width, 1))
x_test  = np.zeros((test_num, timestep, height, width, 1))
y_test  = np.zeros((test_num, timestep, height, width, 1))

# sliding windows: each input sample covers 6 consecutive days, and its
# target is the 6 days that follow (shifted forward by `timestep`)
for i in range(timestep):
    x_train[:, i, :, :, 0] = pre[i : i + train_num]
    y_train[:, i, :, :, 0] = pre[i + timestep : i + train_num + timestep]
    x_test [:, i, :, :, 0] = pre[train_num + i : train_num + i + test_num]
    y_test [:, i, :, :, 0] = pre[train_num + i + timestep : train_num + i + test_num + timestep]

del pre

# enable memory growth on every visible GPU
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)

model = Sequential()
# CNN encoder: 3-D convolutions, then pooling that halves time and space
model.add(Conv3D(filters=32, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(Dropout(0.2))
model.add(Conv3D(filters=16, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(Dropout(0.2))
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(BatchNormalization())
model.add(MaxPooling3D(pool_size=(2, 2, 2), strides=None, padding='same', data_format='channels_last'))

# stacked ConvLSTM2D core, every layer returning the full sequence
model.add(ConvLSTM2D(filters=64, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=64, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=64, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=64, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=64, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())
model.add(ConvLSTM2D(filters=32, kernel_size=(5, 5), padding='same', return_sequences=True))
model.add(BatchNormalization())

# CNN decoder: upsample back to the input resolution, 1-channel linear output
model.add(UpSampling3D(size=(2, 2, 2), data_format='channels_last'))
model.add(Conv3D(filters=32, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(Dropout(0.3))
model.add(Conv3D(filters=16, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(Dropout(0.3))
model.add(Conv3D(filters=3, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))
model.add(BatchNormalization())
model.add(Conv3D(filters=1, kernel_size=(3, 3, 3), padding='same', data_format='channels_last'))

model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=0.00005), metrics=["mae", "mse", "acc"])
model.build((None, None, 60, 120, 1))
print(model.summary())


# checkpoint every epoch, filename tagged with epoch number and losses
filepath = "/weights.{epoch:03d}-{loss:.4f}-{val_loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', save_best_only=False, mode='min', period=1)
history = model.fit(x_train, y_train, batch_size=16, epochs=50, validation_split=0.05, callbacks=[checkpoint], shuffle=True)

import pickle
# save the training history for later inspection
with open('/history_epoch50.pkl', 'wb') as fw:
    pickle.dump(history.history, fw)
model.save('model.h5')

Before an explicit learning rate was set, a training run looked like this:

Epoch 1/50
594/594 [==============================] - 465s 783ms/step - loss: 685.1493 - mae: 0.0484 - mse: 0.0098 - acc: 0.1179 - val_loss: 17.0912 - val_mae: 0.0089 - val_mse: 2.4368e-04 - val_acc: 0.1203
Epoch 2/50
594/594 [==============================] - 479s 807ms/step - loss: 72.4784 - mae: 0.0199 - mse: 0.0010 - acc: 0.1182 - val_loss: 15.8260 - val_mae: 0.0081 - val_mse: 2.2564e-04 - val_acc: 0.1203
Epoch 3/50
594/594 [==============================] - 483s 813ms/step - loss: 43.4806 - mae: 0.0153 - mse: 6.1992e-04 - acc: 0.1182 - val_loss: 15.5763 - val_mae: 0.0078 - val_mse: 2.2208e-04 - val_acc: 0.1203
Epoch 4/50
594/594 [==============================] - 479s 806ms/step - loss: 32.5449 - mae: 0.0132 - mse: 4.6401e-04 - acc: 0.1182 - val_loss: 15.2853 - val_mae: 0.0079 - val_mse: 2.1793e-04 - val_acc: 0.1203
Epoch 5/50
594/594 [==============================] - 478s 804ms/step - loss: 26.5099 - mae: 0.0118 - mse: 3.7796e-04 - acc: 0.1182 - val_loss: 14.8555 - val_mae: 0.0077 - val_mse: 2.1180e-04 - val_acc: 0.1203
Epoch 6/50
594/594 [==============================] - 479s 806ms/step - loss: 22.6157 - mae: 0.0108 - mse: 3.2244e-04 - acc: 0.1182 - val_loss: 14.4719 - val_mae: 0.0076 - val_mse: 2.0633e-04 - val_acc: 0.1203
Epoch 7/50
594/594 [==============================] - 479s 807ms/step - loss: 19.8603 - mae: 0.0100 - mse: 2.8316e-04 - acc: 0.1182 - val_loss: 14.1142 - val_mae: 0.0075 - val_mse: 2.0123e-04 - val_acc: 0.1203
Epoch 8/50
594/594 [==============================] - 479s 807ms/step - loss: 17.8966 - mae: 0.0094 - mse: 2.5516e-04 - acc: 0.1182 - val_loss: 14.0269 - val_mae: 0.0075 - val_mse: 1.9999e-04 - val_acc: 0.1203
Epoch 9/50
594/594 [==============================] - 480s 807ms/step - loss: 16.4687 - mae: 0.0089 - mse: 2.3480e-04 - acc: 0.1182 - val_loss: 13.6428 - val_mae: 0.0072 - val_mse: 1.9451e-04 - val_acc: 0.1203
Epoch 10/50
594/594 [==============================] - 480s 808ms/step - loss: 15.4713 - mae: 0.0086 - mse: 2.2058e-04 - acc: 0.1182 - val_loss: 13.7580 - val_mae: 0.0075 - val_mse: 1.9615e-04 - val_acc: 0.1203
Epoch 11/50
594/594 [==============================] - 481s 809ms/step - loss: 14.8372 - mae: 0.0083 - mse: 2.1154e-04 - acc: 0.1182 - val_loss: 13.3675 - val_mae: 0.0072 - val_mse: 1.9059e-04 - val_acc: 0.1203
Epoch 12/50
594/594 [==============================] - 480s 808ms/step - loss: 14.2812 - mae: 0.0080 - mse: 2.0361e-04 - acc: 0.1182 - val_loss: 13.4506 - val_mae: 0.0073 - val_mse: 1.9177e-04 - val_acc: 0.1203
Epoch 13/50
594/594 [==============================] - 482s 812ms/step - loss: 13.9967 - mae: 0.0079 - mse: 1.9956e-04 - acc: 0.1182 - val_loss: 13.3309 - val_mae: 0.0071 - val_mse: 1.9007e-04 - val_acc: 0.1203
Epoch 14/50
594/594 [==============================] - 482s 811ms/step - loss: 13.7858 - mae: 0.0078 - mse: 1.9655e-04 - acc: 0.1182 - val_loss: 13.0916 - val_mae: 0.0070 - val_mse: 1.8665e-04 - val_acc: 0.1203
Epoch 15/50
594/594 [==============================] - 482s 812ms/step - loss: 13.5475 - mae: 0.0077 - mse: 1.9315e-04 - acc: 0.1182 - val_loss: 13.0232 - val_mae: 0.0072 - val_mse: 1.8568e-04 - val_acc: 0.1203
Epoch 16/50
594/594 [==============================] - 483s 814ms/step - loss: 13.3415 - mae: 0.0076 - mse: 1.9022e-04 - acc: 0.1182 - val_loss: 13.0282 - val_mae: 0.0071 - val_mse: 1.8575e-04 - val_acc: 0.1203
Epoch 17/50
594/594 [==============================] - 477s 803ms/step - loss: 13.2199 - mae: 0.0075 - mse: 1.8848e-04 - acc: 0.1182 - val_loss: 13.1452 - val_mae: 0.0072 - val_mse: 1.8742e-04 - val_acc: 0.1203
Epoch 18/50
594/594 [==============================] - 477s 803ms/step - loss: 13.0669 - mae: 0.0075 - mse: 1.8630e-04 - acc: 0.1182 - val_loss: 14.1722 - val_mae: 0.0083 - val_mse: 2.0206e-04 - val_acc: 0.1203
Epoch 19/50
594/594 [==============================] - 477s 803ms/step - loss: 12.9365 - mae: 0.0074 - mse: 1.8444e-04 - acc: 0.1182 - val_loss: 12.9311 - val_mae: 0.0071 - val_mse: 1.8437e-04 - val_acc: 0.1203
Epoch 20/50
594/594 [==============================] - 488s 821ms/step - loss: 12.8607 - mae: 0.0074 - mse: 1.8336e-04 - acc: 0.1182 - val_loss: 13.6763 - val_mae: 0.0075 - val_mse: 1.9499e-04 - val_acc: 0.1203

One note: the loss is a custom function, which is why the loss values are so large; swapping in plain mse behaves the same way, so the loss function is not the cause. acc simply would not move, and after setting the learning rate lr=0.00005 it still looked the same, so I did not let the run finish.

Epoch 1/50
594/594 [==============================] - 487s 820ms/step - loss: 4703.4019 - mae: 0.1567 - mse: 0.0671 - acc: 0.1151 - val_loss: 43.4248 - val_mae: 0.0178 - val_mse: 6.1913e-04 - val_acc: 0.1203
Epoch 2/50
594/594 [==============================] - 493s 830ms/step - loss: 1084.3707 - mae: 0.0805 - mse: 0.0155 - acc: 0.1179 - val_loss: 26.5058 - val_mae: 0.0123 - val_mse: 3.7791e-04 - val_acc: 0.1203
Epoch 3/50
594/594 [==============================] - 497s 836ms/step - loss: 611.0968 - mae: 0.0603 - mse: 0.0087 - acc: 0.1181 - val_loss: 22.6246 - val_mae: 0.0109 - val_mse: 3.2257e-04 - val_acc: 0.1203
Epoch 4/50
594/594 [==============================] - 493s 830ms/step - loss: 392.6740 - mae: 0.0481 - mse: 0.0056 - acc: 0.1182 - val_loss: 20.9750 - val_mae: 0.0102 - val_mse: 2.9905e-04 - val_acc: 0.1203

Looking forward to your answers; thanks, everyone O(∩_∩)O


1 answer

  • 双圣树下的阿尔达 2021-12-29 11:45

    acc is classification accuracy; if your task is not a (binary) classification problem, using acc is not recommended. Accuracy counts exact matches between prediction and target, which a continuous-valued regression output will essentially never produce, so it stays frozen no matter how much the fit improves. Just use metrics suited to regression, such as mse and mae.
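
    A minimal sketch of that change, reusing the model from the question. If an accuracy-style number is still wanted, an accuracy-like metric that makes sense for regression can be defined instead; the within_tolerance helper below is a hypothetical illustration, not part of the original code:

import tensorflow as tf
import keras

# hypothetical accuracy-like metric for regression: the fraction of grid
# points whose (normalized) prediction lies within `tol` of the target
def within_tolerance(y_true, y_pred, tol=0.05):
    close = tf.cast(tf.abs(y_true - y_pred) <= tol, tf.float32)
    return tf.reduce_mean(close)

# compile with regression metrics only; "acc" is dropped
model.compile(loss='mse',
              optimizer=keras.optimizers.Adam(lr=0.00005),
              metrics=['mae', 'mse', within_tolerance])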

