SilyaSophie 2024-07-28 19:28

Python VAE implementation

Problem: my VAE code throws an error when run. I am using a subset of the CICIoT2023 dataset.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda, Conv1D, Flatten
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras import backend as K
from tensorflow.keras.datasets import mnist
from sklearn.preprocessing import MinMaxScaler, StandardScaler

import pandas as pd
import time

# Load the dataset
# Path to the training CSV file
csv_path_train = 'CICIoT2023/CICIoT2023/benign.csv'
# Read the data
X_train = pd.read_csv(csv_path_train)
X_train = X_train.values
X_train = np.nan_to_num(MinMaxScaler().fit_transform(StandardScaler().fit_transform(X_train)))

X_train = np.reshape(X_train, (-1, 100, 46))
print(f"train:{X_train.shape}")
idx = np.random.randint(0, X_train.shape[0], 16)
imgs = X_train[idx]
# print(imgs.shape)
print(f"imgs:{imgs.shape}")
# noise = np.random.normal(0, 1, (16, 100, 1))
# # print(noise.shape)
# print(f"noise:{noise.shape}")

# Path to the test CSV file
csv_path_test = 'CICIoT2023/CICIoT2023/ceshi.csv'
Y_test = pd.read_csv(csv_path_test)
Y_test_normal = Y_test[Y_test.label == 'BenignTraffic'].drop(labels='label', axis=1).values
Y_test_normal = np.nan_to_num(MinMaxScaler().fit_transform(StandardScaler().fit_transform(Y_test_normal)))
Y_test_abnormal = Y_test[Y_test.label != 'BenignTraffic'].drop(labels='label', axis=1).values
Y_test_abnormal = np.nan_to_num(MinMaxScaler().fit_transform(StandardScaler().fit_transform(Y_test_abnormal)))
Y_test_normal = np.reshape(Y_test_normal, (-1, 100, 46))
Y_test_abnormal = np.reshape(Y_test_abnormal, (-1, 100, 46))

# Define VAE architecture
#original_dim = X_train.shape[1]
#latent_dim = 2
batch_size = 100
original_dim = 784
latent_dim = 2
intermediate_dim = 256
epochs = 50

def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim,dim))
    epsilon_reshaped = tf.reshape(epsilon, [-1, 392, 512])
    return z_mean + K.exp(0.5 * z_log_var) * epsilon_reshaped

inputs = Input(shape=(original_dim,100))
h = Conv1D(1024, kernel_size=3, strides=2, padding='same', activation='relu')(inputs)
z_mean=Dense(512, activation='relu')(h)
z_log_var=Dense(512, activation='relu')(h)
z = Lambda(sampling)([z_mean, z_log_var])

encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

latent_inputs = Input(shape=(latent_dim,))
x = Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = Dense(original_dim, activation='sigmoid')(x)
#decoder = Model(latent_inputs, outputs, name='decoder')

def build_decoder():
    model = Sequential()
    model.add(latent_inputs)
    model.add(Dense(784, activation='sigmoid'))  # add a fully connected layer with output dimension 784
    # model.add(outputs)
    model.add(Flatten())
    model.add(Dense(2, activation='softmax'))
    model.summary()
    return model

# Decoder part: two fully connected layers; x_decoded_mean is the reconstructed output
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

decoder=build_decoder()

print(encoder(inputs))

outputs = decoder(encoder(inputs)[0])

vae = Model(inputs, outputs, name='vae')

# Define VAE loss
reconstruction_loss = tf.keras.losses.binary_crossentropy(inputs, outputs)
reconstruction_loss *= original_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')

# Train VAE
vae.fit(X_train, epochs=10, batch_size=32, validation_data=(Y_test))

The printed output and the error message are as follows:

Traceback (most recent call last):
  File "E:/PythonProject/test02/VAE-TCN.py", line 91, in <module>
    outputs = decoder(encoder(inputs)[0])
  File "D:\Software\Anaconda\envs\test02\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 737, in __call__
    self.name)
  File "D:\Software\Anaconda\envs\test02\lib\site-packages\tensorflow_core\python\keras\engine\input_spec.py", line 213, in assert_input_compatibility
    ' but received input with shape ' + str(shape))
ValueError: Input 0 of layer sequential is incompatible with the layer: expected axis -1 of input shape to have value 2 but received input with shape [None, 392, 512]

Process finished with exit code 1


Could anyone point me in the right direction? Thank you!


1 answer

  • 何事_ 2024-07-28 21:26

    Quoted from the free WeChat mini-program: 皆我百晓生

    From the error message, the problem is a dimension mismatch between what your encoder produces and what your decoder expects as input. Specifically, the decoder is built from Input(shape=(latent_dim,)), so it expects its last axis to have size 2, while the encoder output you feed it has shape [None, 392, 512]. In other words, the output dimensions of the encoder and the input dimensions of the decoder in your architecture are inconsistent, and the fix is to make them match. Let's analyze the problem and resolve it.
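
    A quick way to see the mismatch (a hypothetical check using the names from your script, not something you need to keep) is to print the shapes that flow between the two models:

    print(encoder.outputs[0].shape)   # z_mean: (None, 392, 512) with your current Conv1D/Dense stack
    print(encoder.outputs[1].shape)   # z_log_var: same shape
    # the decoder, however, starts from Input(shape=(latent_dim,)) and expects a last axis of 2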

    First, we need to understand the basic structure of a VAE. The encoder compresses the input into a lower-dimensional representation (the latent space), and the decoder reconstructs the original data from that representation. In your code, the encoder should therefore end in the latent dimension, while the decoder should take that latent vector as input and produce an output whose dimension matches the original data (784 here). Your code mixes these up; let's correct it.

    Here is the corrected code snippet:

    Define the encoder part:

    inputs = Input(shape=(original_dim,))  # this should be the dimensionality of the original data, i.e. 784
    h = Conv1D(...)  # your convolutional layer definition
    z_mean = Dense(...)  # mean output layer; its output dimension should match the latent dimension
    z_log_var = Dense(...)  # log-variance output layer, same as above
    z = Lambda(sampling)([z_mean, z_log_var])  # sampling layer that draws samples in the latent space
    encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
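
    One more thing worth fixing is the sampling helper itself: in your original code epsilon is drawn with shape (batch, dim, dim) and then reshaped to [-1, 392, 512], which hard-codes the oversized shape instead of the latent dimension. A minimal sketch (my suggestion, not code from your post) that simply matches epsilon to whatever shape z_mean has:

    def sampling(args):
        # draw epsilon with exactly the same shape as z_mean, so no extra reshape is needed
        z_mean, z_log_var = args
        epsilon = K.random_normal(shape=K.shape(z_mean))
        return z_mean + K.exp(0.5 * z_log_var) * epsilon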
    

    Define the decoder part:

    The snippet you posted does not include a complete decoder definition, but from the error message you need to make sure the decoder's input dimension matches the encoder's output dimension. Assuming your latent space has dimension 2 (latent_dim=2), the decoder's input should have that dimension, and its output should match the original data dimension, i.e. 784. You can define the decoder like this:

    latent_inputs = Input(shape=(latent_dim,))  # input: the latent-space representation
    x = Dense(intermediate_dim, activation='relu')(latent_inputs)  # fully connected layer with the intermediate dimension
    outputs = Dense(original_dim, activation='sigmoid')(x)  # output layer whose dimension matches the original data (784 here)
    decoder = Model(latent_inputs, outputs, name='decoder')
    

    Then you can use the corrected encoder and decoder to build the complete VAE model:

    vae = Model(inputs, decoder(encoder(inputs)[2]))  # index 2 is the sampled z; use the corrected encoder and decoder here
    

    I hope this helps you solve the problem! If you have any further questions, feel free to ask.
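
    For completeness, here is one minimal end-to-end sketch adapted to your data, which after preprocessing comes in windows of shape (100, 46) rather than flattened 784-dimensional MNIST vectors. The layer sizes and the choice to flatten each window are my assumptions, not part of your original code, so treat this as a starting point rather than a drop-in replacement:

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Dense, Lambda, Flatten, Reshape
    from tensorflow.keras.models import Model
    from tensorflow.keras import backend as K

    timesteps, features = 100, 46        # shape of one CICIoT2023 window after your reshape
    latent_dim = 2
    intermediate_dim = 256

    def sampling(args):
        z_mean, z_log_var = args
        epsilon = K.random_normal(shape=K.shape(z_mean))
        return z_mean + K.exp(0.5 * z_log_var) * epsilon

    # Encoder: (100, 46) window -> latent_dim
    enc_in = Input(shape=(timesteps, features))
    h = Dense(intermediate_dim, activation='relu')(Flatten()(enc_in))
    z_mean = Dense(latent_dim)(h)        # no activation on the mean / log-variance heads
    z_log_var = Dense(latent_dim)(h)
    z = Lambda(sampling)([z_mean, z_log_var])
    encoder = Model(enc_in, [z_mean, z_log_var, z], name='encoder')

    # Decoder: latent_dim -> (100, 46) window
    dec_in = Input(shape=(latent_dim,))
    x = Dense(intermediate_dim, activation='relu')(dec_in)
    x = Dense(timesteps * features, activation='sigmoid')(x)
    dec_out = Reshape((timesteps, features))(x)
    decoder = Model(dec_in, dec_out, name='decoder')

    # Full VAE: feed the sampled z (index 2) to the decoder, not z_mean (index 0)
    outputs = decoder(encoder(enc_in)[2])
    vae = Model(enc_in, outputs, name='vae')

    # Loss: per-window reconstruction error plus the KL divergence term
    reconstruction_loss = K.sum(
        tf.keras.losses.binary_crossentropy(enc_in, outputs), axis=-1)
    kl_loss = -0.5 * K.sum(
        1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae.add_loss(K.mean(reconstruction_loss + kl_loss))
    vae.compile(optimizer='adam')

    # An autoencoder trains against its own input, so no labels are passed;
    # X_train is the (n_windows, 100, 46) array from your preprocessing step.
    # vae.fit(X_train, epochs=10, batch_size=32, validation_data=(Y_test_normal, None))

    This keeps z_mean, z_log_var, z, and the decoder input all at latent_dim, which is exactly the consistency the error message is asking for. The add_loss pattern mirrors the one you already use; on newer TensorFlow releases you may need to move the loss into a custom train_step instead. Whether a Conv1D front end works better than the flat Dense encoder is something to evaluate on your data separately.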


