万里长征第一步 2023-11-28 11:31

Test set accuracy is extremely low

The test set accuracy is extremely low. Why is that? Please help me take a look.
1. Converting the data to images (using the Gramian Angular Field). The data are sEMG signals from 12 sensors, so each window becomes an array of shape (300, 300, 12).

import math
import numpy as np
from pyts.image import GramianAngularField
from skimage.feature import hog

imageData = []
imageLabel = []          # empty containers for images and labels
imageLength = 300        # window length
classes = 49
method = 'summation'
n_sample, n_channels = scaled_X.shape
img_sz = 300             # image size

for i in range(classes):
    # collect the row indices that belong to class i
    index = []
    for j in range(label_ex.shape[0]):
        if label_ex[j, :] == i:
            index.append(j)

    iemg = scaled_X[index, :]
    # number of non-overlapping windows for this class
    length = math.floor((iemg.shape[0] - imageLength) / imageLength)
    for j in range(length):
        subImage = iemg[imageLength * j:imageLength * (j + 1), :]  # one window, shape (300, 12)

        gaf = GramianAngularField(image_size=img_sz, method=method)  # define the Gramian Angular Field
        gaf_images = gaf.fit_transform(subImage.T)  # one GAF image per channel: (12, 300, 300)

        connectdata = []
        for c in range(n_channels):
            gaf_img = gaf_images[c, :, :]  # the GAF image of one of the 12 channels
            # HOG on each channel's image; note that only the visualization
            # image (hog_img) is kept, while the descriptor fd is discarded
            fd, hog_img = hog(gaf_img,
                              orientations=8,
                              pixels_per_cell=(16, 16),
                              cells_per_block=(1, 1),
                              visualize=True,
                              multichannel=False)  # single-channel input (channel_axis=None in newer scikit-image)
            connectdata.append(hog_img)  # one entry per channel
        connectdatas = np.stack(connectdata)  # stack the 12 channel images: (12, 300, 300)
        print("connectdatas shape:", connectdatas.shape)

        imageData.append(connectdatas)
        imageLabel.append(i)

imageData = np.array(imageData)
connectdatas = np.transpose(imageData, (0, 2, 3, 1))  # channels last: (N, 300, 300, 12)
imageLabel = np.array(imageLabel)
print(connectdatas.shape)
print(imageLabel.shape)
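A quick sanity check worth adding here (my addition, assuming imageLabel holds the integer class ids built above): count how many windows each of the 49 classes actually produced, since classes with very few windows are hard to learn from and to evaluate on.

import numpy as np

# windows generated per class; with 49 classes and only a few hundred
# windows in total, some classes may end up with very few samples
unique, counts = np.unique(imageLabel, return_counts=True)
for cls, cnt in zip(unique, counts):
    print("class", cls, ":", cnt, "windows")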
  

2. Then split the dataset and one-hot encode the labels.


import h5py
import numpy as np
import tensorflow as tf
import keras
from keras.layers import Input, Dense, ZeroPadding2D, Dropout, Activation, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D
from keras.models import Model
import matplotlib.pyplot as plt
%matplotlib inline

def convert_to_one_hot(Y, C):
    # map integer labels to a (C, N) one-hot matrix
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y


imageData = connectdatas


# shuffle the data and labels with the same permutation
N = imageData.shape[0]
index = np.random.permutation(N)
data  = imageData[index, :, :, :]
label = imageLabel[index]

# one-hot encode the labels
label = convert_to_one_hot(label, 49).T

# split into training and test sets (80/20)
N = data.shape[0]
num_train = round(N * 0.8)
X_train = data[0:num_train, :, :, :]
Y_train = label[0:num_train, :]
X_test  = data[num_train:N, :, :, :]
Y_test  = label[num_train:N, :]

print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))

3. Build the model and classify.

import tensorflow as tf
from tensorflow import keras
from keras import layers, models
from keras.models import load_model
 
# Define an Inception-style network model
# (the module layout below actually follows GoogLeNet / Inception-v1 rather than true InceptionV3)
def Inceptionv3():
    input_tensor = layers.Input(shape=(300, 300, 12))
    
    x = layers.Conv2D(32, (3, 3), strides=(2, 2), padding='valid', activation='relu')(input_tensor)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(32, (3, 3), strides=(1, 1), padding='valid', activation='relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
 
    x = layers.Conv2D(80, (1, 1), strides=(1, 1), padding='valid', activation='relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(192, (3, 3), strides=(1, 1), padding='valid', activation='relu')(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
 
    # Inception modules
    x = inception_module(x, [64, 96, 128, 16, 32, 32])
    x = inception_module(x, [128, 128, 192, 32, 96, 64])
 
    x = layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
 
    # Inception modules
    x = inception_module(x, [192, 96, 208, 16, 48, 64])
    x = inception_module(x, [160, 112, 224, 24, 64, 64])
    x = inception_module(x, [128, 128, 256, 24, 64, 64])
    x = inception_module(x, [112, 144, 288, 32, 64, 64])
    x = inception_module(x, [256, 160, 320, 32, 128, 128])
 
    x = layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2))(x)
 
    # Inception modules
    x = inception_module(x, [256, 160, 320, 32, 128, 128])
    x = inception_module(x, [384, 192, 384, 48, 128, 128])
 
    x = layers.GlobalAveragePooling2D()(x)
    output_tensor = layers.Dense(49, activation='softmax')(x)
 
    model = models.Model(inputs=input_tensor, outputs=output_tensor)
    
    return model
 
# Define the Inception module: four parallel branches (1x1; 1x1->3x3; 1x1->5x5; pool->1x1)
# concatenated along the channel axis
def inception_module(x, filters):
    branch1x1 = layers.Conv2D(filters[0], (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
 
    branch3x3 = layers.Conv2D(filters[1], (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
    branch3x3 = layers.Conv2D(filters[2], (3, 3), strides=(1, 1), padding='same', activation='relu')(branch3x3)
 
    branch5x5 = layers.Conv2D(filters[3], (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
    branch5x5 = layers.Conv2D(filters[4], (5, 5), strides=(1, 1), padding='same', activation='relu')(branch5x5)
 
    branch_pool = layers.MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(x)
    branch_pool =layers.Conv2D(filters[5], (1, 1), strides=(1, 1), padding='same', activation='relu')(branch_pool)
 
    output = layers.concatenate([branch1x1, branch3x3, branch5x5, branch_pool], axis=-1)
    
    return output
 

 
# Build the Inception model
model = Inceptionv3()
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
 
# Train the model
model.fit(X_train, Y_train, batch_size=128, epochs=50, verbose=1)
 
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', test_loss)
print('Test accuracy:', test_acc)
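Since model.fit is called without any validation data, overfitting stays invisible during training. A hedged sketch (standard tf.keras callbacks, nothing specific to this model; the smaller batch size is my assumption, not from the original run) that holds out a validation split and stops early:

from tensorflow.keras.callbacks import EarlyStopping

# monitor a 10% validation split and keep the best weights;
# a large train/validation gap here mirrors the train/test gap below
early_stop = EarlyStopping(monitor='val_loss', patience=5,
                           restore_best_weights=True)
history = model.fit(X_train, Y_train,
                    batch_size=32,   # smaller batches: more updates per epoch on a tiny dataset
                    epochs=50,
                    validation_split=0.1,
                    callbacks=[early_stop],
                    verbose=1)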

4. Results

Epoch 1/50
3/3 [==============================] - 30s 8s/step - loss: 4.7497 - accuracy: 0.0116
Epoch 2/50
3/3 [==============================] - 25s 8s/step - loss: 3.8962 - accuracy: 0.0260
Epoch 3/50
3/3 [==============================] - 25s 8s/step - loss: 3.8852 - accuracy: 0.0318
Epoch 4/50
3/3 [==============================] - 25s 8s/step - loss: 3.8771 - accuracy: 0.0260
Epoch 5/50
3/3 [==============================] - 25s 8s/step - loss: 3.8285 - accuracy: 0.0289
Epoch 6/50
3/3 [==============================] - 25s 8s/step - loss: 3.7667 - accuracy: 0.0289
Epoch 7/50
3/3 [==============================] - 24s 8s/step - loss: 3.7337 - accuracy: 0.0405
Epoch 8/50
3/3 [==============================] - 24s 8s/step - loss: 3.6868 - accuracy: 0.0405
Epoch 9/50
3/3 [==============================] - 24s 8s/step - loss: 3.6023 - accuracy: 0.0520
Epoch 10/50
3/3 [==============================] - 25s 8s/step - loss: 3.5208 - accuracy: 0.0347
Epoch 11/50
3/3 [==============================] - 25s 8s/step - loss: 3.4885 - accuracy: 0.0434
Epoch 12/50
3/3 [==============================] - 25s 8s/step - loss: 3.4126 - accuracy: 0.0578
Epoch 13/50
3/3 [==============================] - 26s 8s/step - loss: 3.4616 - accuracy: 0.0405
Epoch 14/50
3/3 [==============================] - 26s 8s/step - loss: 3.3438 - accuracy: 0.0520
Epoch 15/50
3/3 [==============================] - 25s 8s/step - loss: 3.3497 - accuracy: 0.0578
Epoch 16/50
3/3 [==============================] - 25s 8s/step - loss: 3.1960 - accuracy: 0.0867
Epoch 17/50
3/3 [==============================] - 24s 8s/step - loss: 3.1662 - accuracy: 0.0838
Epoch 18/50
3/3 [==============================] - 24s 8s/step - loss: 3.0967 - accuracy: 0.0694
Epoch 19/50
3/3 [==============================] - 26s 8s/step - loss: 3.0880 - accuracy: 0.0751
Epoch 20/50
3/3 [==============================] - 25s 8s/step - loss: 3.0908 - accuracy: 0.0751
Epoch 21/50
3/3 [==============================] - 24s 8s/step - loss: 3.1432 - accuracy: 0.0925
Epoch 22/50
3/3 [==============================] - 24s 8s/step - loss: 3.0984 - accuracy: 0.0954
Epoch 23/50
3/3 [==============================] - 25s 8s/step - loss: 2.9826 - accuracy: 0.1185
Epoch 24/50
3/3 [==============================] - 25s 8s/step - loss: 2.9005 - accuracy: 0.1127
Epoch 25/50
3/3 [==============================] - 26s 8s/step - loss: 2.7154 - accuracy: 0.1561
Epoch 26/50
3/3 [==============================] - 25s 8s/step - loss: 2.7371 - accuracy: 0.1503
Epoch 27/50
3/3 [==============================] - 25s 8s/step - loss: 2.7225 - accuracy: 0.1214
Epoch 28/50
3/3 [==============================] - 27s 9s/step - loss: 2.5903 - accuracy: 0.1792
Epoch 29/50
3/3 [==============================] - 26s 8s/step - loss: 2.4250 - accuracy: 0.1994
Epoch 30/50
3/3 [==============================] - 25s 8s/step - loss: 2.2562 - accuracy: 0.2486
Epoch 31/50
3/3 [==============================] - 24s 8s/step - loss: 2.2740 - accuracy: 0.2254
Epoch 32/50
3/3 [==============================] - 24s 8s/step - loss: 2.5253 - accuracy: 0.1936
Epoch 33/50
3/3 [==============================] - 24s 8s/step - loss: 2.4184 - accuracy: 0.1676
Epoch 34/50
3/3 [==============================] - 24s 8s/step - loss: 2.3348 - accuracy: 0.2312
Epoch 35/50
3/3 [==============================] - 24s 8s/step - loss: 2.3162 - accuracy: 0.2543
Epoch 36/50
3/3 [==============================] - 24s 8s/step - loss: 2.1927 - accuracy: 0.2688
Epoch 37/50
3/3 [==============================] - 24s 8s/step - loss: 2.0228 - accuracy: 0.2775
Epoch 38/50
3/3 [==============================] - 24s 8s/step - loss: 2.0253 - accuracy: 0.3439
Epoch 39/50
3/3 [==============================] - 24s 8s/step - loss: 2.0712 - accuracy: 0.2746
Epoch 40/50
3/3 [==============================] - 24s 8s/step - loss: 2.0722 - accuracy: 0.2803
Epoch 41/50
3/3 [==============================] - 24s 8s/step - loss: 1.9843 - accuracy: 0.3208
Epoch 42/50
3/3 [==============================] - 24s 8s/step - loss: 1.9570 - accuracy: 0.3121
Epoch 43/50
3/3 [==============================] - 24s 8s/step - loss: 1.9189 - accuracy: 0.3526
Epoch 44/50
3/3 [==============================] - 24s 8s/step - loss: 1.7406 - accuracy: 0.3988
Epoch 45/50
3/3 [==============================] - 24s 8s/step - loss: 1.6777 - accuracy: 0.4075
Epoch 46/50
3/3 [==============================] - 24s 8s/step - loss: 1.4576 - accuracy: 0.4798
Epoch 47/50
3/3 [==============================] - 24s 7s/step - loss: 1.5254 - accuracy: 0.4653
Epoch 48/50
3/3 [==============================] - 24s 8s/step - loss: 1.9673 - accuracy: 0.3526
Epoch 49/50
3/3 [==============================] - 25s 8s/step - loss: 1.4825 - accuracy: 0.4913
Epoch 50/50
3/3 [==============================] - 24s 8s/step - loss: 1.4271 - accuracy: 0.5058
Test loss: 11.708770751953125
Test accuracy: 0.023255813866853714
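A follow-up diagnostic (a sketch, assuming scikit-learn is available): a per-class report makes visible whether the model has collapsed onto a few classes, which a single accuracy number hides.

from sklearn.metrics import classification_report
import numpy as np

# compare argmax predictions against the one-hot test labels
y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.argmax(Y_test, axis=1)
print(classification_report(y_true, y_pred, zero_division=0))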


CSDN-Ada助手 (CSDN-AI official account) 2023-11-28 14:04

    [The following answer was generated by GPT]

    There are several possible reasons why the test set accuracy is low:

    1. Data preprocessing problems: converting the data into images may involve preprocessing issues, for example an ill-chosen image size or an unsuitable transformation method. These can lose or distort information in the images and hurt the model's accuracy. Make sure the image size and transformation method are appropriate, and check that the preprocessing steps are correct (a quick visual check is sketched after this list).

    2. Class imbalance: the sample distribution across the training and test sets may be unbalanced, i.e. some classes have far more samples than others. The model can then become biased toward the majority classes, dragging down accuracy on the rest. Check the class distribution of the dataset and consider balancing techniques such as undersampling or oversampling.

    3. Model choice: the model in use may not suit this image data. Different data types and problems can call for different model families. Make sure the architecture and its hyperparameters fit the problem, and adjust them to its characteristics.
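    As a minimal illustration of point 1 (a sketch, assuming the connectdatas array and integer imageLabel from the question are in scope), one can eyeball a few of the generated images:

import numpy as np
import matplotlib.pyplot as plt

# show channel 0 of the first window of the first five classes;
# badly scaled or degenerate GAF/HOG images are usually obvious by eye
fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for ax, cls in zip(axes, range(5)):
    idx = np.where(imageLabel == cls)[0][0]  # first window of this class
    ax.imshow(connectdatas[idx, :, :, 0], cmap='gray')
    ax.set_title('class %d' % cls)
    ax.axis('off')
plt.show()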

    Some possible ways to address these issues:

    1. Adjust the image size and transformation method: try different image sizes and transformation methods and watch how the model's accuracy changes. Other image encodings may extract more useful feature information.

    2. Balance the dataset: if the training and test sets turn out to be imbalanced, rebalance the sample counts, for example by undersampling or oversampling (a resampling sketch follows this list).

    3. Try other models: if the current model cannot reach a reasonable accuracy, try another one, for example a deeper network, or a different model family such as a convolutional neural network (CNN) or a recurrent neural network (RNN), to suit the characteristics of the image data.
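    For point 2, a minimal oversampling sketch (oversample_to_max is a hypothetical helper, not from any library; it assumes integer labels and should be applied to the training split only, to avoid test-set leakage):

import numpy as np

def oversample_to_max(X, y, seed=0):
    # resample every class (with replacement) up to the size of the largest class
    rng = np.random.default_rng(seed)
    counts = np.bincount(y)
    target = counts.max()
    keep = []
    for cls in np.unique(y):
        cls_idx = np.where(y == cls)[0]
        keep.append(rng.choice(cls_idx, size=target, replace=True))
    keep = np.concatenate(keep)
    rng.shuffle(keep)
    return X[keep], y[keep]

X_bal, y_bal = oversample_to_max(X_train_images, train_labels)  # hypothetical training-split arrays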

    Additionally, to help pinpoint the problem, please provide more details, such as the model code, the training and testing code, and specifics of the dataset. That would allow a more precise diagnosis and a more concrete solution.



