A Keras model trained with the same loss and metrics gives different outputs each run

I built a model with Keras. Training it with the same loss and metrics still gives different outputs from run to run. Why does this happen?

1 Answer

This is normal. The initial conditions (e.g. random weight initialization) differ between runs, and floating-point rounding errors accumulate differently, so training results are inherently somewhat random.

sunyuxiu: Could you explain that in more detail? I still don't understand.
replied about 2 months ago
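To make that concrete: most of the run-to-run variation comes from random weight initialization. A minimal sketch (not part of the original answer) of pinning the seeds so repeated runs at least start from identical initial weights; it assumes TensorFlow >= 2.7, where `tf.keras.utils.set_random_seed` exists:

```python
import numpy as np
import tensorflow as tf

def fresh_layer():
    # One call seeds Python's random, NumPy and TensorFlow/Keras together
    # (TF >= 2.7; on older setups seed random, numpy and tf individually).
    tf.keras.utils.set_random_seed(42)
    layer = tf.keras.layers.Dense(4)
    layer.build((None, 3))   # initial weights are drawn here
    return layer

# Two layers built after identical seeding start from identical weights:
w1 = fresh_layer().get_weights()[0]
w2 = fresh_layer().get_weights()[0]
```

Even with seeds pinned, some GPU kernels and multi-threaded reductions remain nondeterministic; newer TF versions offer `tf.config.experimental.enable_op_determinism()` to tighten that further.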
Related questions

How do I extract features from multiple layers of a pretrained model in Keras?

![screenshot](https://img-ask.csdn.net/upload/201906/19/1560958477_965287.jpg)

I want to extract features from different layers of a pretrained convolutional network and concatenate them, to build a structure like the one in the image above. My code is:

```
base_model = VGGFace(model='resnet50', include_top=False)
model1 = base_model
model2 = base_model
input1 = Input(shape=(197,197,3))
model1_out = model1.layers[-12].output
model1_in = model1.layers[0].output
model1 = Model(model1_in, model1_out)
x1 = model1(input1)
x1 = GlobalMaxPool2D()(x1)
x2 = model2(input1)
x2 = GlobalMaxPool2D()(x2)
out = Concatenate(axis=-1)([x1, x2])
out = Dense(1, activation='sigmoid')(out)
model3 = Model([input1, input2], out)

from keras.utils import plot_model
plot_model(model3, "model3.png")
import matplotlib.pyplot as plt
img = plt.imread('model3.png')
plt.imshow(img)
```

But the model visualization below shows that the two branches do not share weights. ![screenshot](https://img-ask.csdn.net/upload/201906/19/1560959263_500375.png)
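For the weight-sharing part of this question: Keras shares weights exactly when the same layer or sub-model *instance* is called more than once. A minimal sketch of that pattern, using `tensorflow.keras` and a toy `Dense` layer standing in for the VGGFace sub-model (all names illustrative):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

inp = Input(shape=(4,))
shared = Dense(3, name='shared_dense')  # ONE layer instance

# Calling the same instance twice reuses one set of weights.
branch_a = shared(inp)
branch_b = shared(inp)
out = Concatenate()([branch_a, branch_b])
model = Model(inp, out)

# The whole model holds just one kernel (4x3) and one bias (3): 15 parameters,
# not 30, which proves the two branches share weights.
shared_vars = model.get_layer('shared_dense').weights
pred = model.predict(np.zeros((2, 4)))
```

To tap an intermediate layer of a pretrained model, the usual form is `feature_extractor = Model(base.input, base.layers[-12].output)`; calling that single `feature_extractor` object on every input keeps the weights shared across branches.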

An error appears when plotting the model-accuracy training history with Keras:

After building the deep-learning model, I train it with backpropagation. Training configuration:

```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```

Run the training:

```
train_history = model.fit(x=x_Train_normalize, y=y_Train_OneHot, validation_split=0.2,
                          epochs=10, batch_size=200, verbose=2)
```

which produces: ![screenshot](https://img-ask.csdn.net/upload/201910/17/1571243584_952792.png)

Then I define show_train_history to display the training history:

```
import matplotlib.pyplot as plt

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
```

and plot the accuracy:

```
show_train_history(train_history, 'acc', 'val_acc')
```

which fails as shown: ![screenshot](https://img-ask.csdn.net/upload/201910/17/1571243832_179270.png)

What is going on here? Any help appreciated!
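The most common cause of this failure is a `KeyError`: depending on the Keras version, `fit` records accuracy under `'acc'`/`'val_acc'` or `'accuracy'`/`'val_accuracy'`. A pure-Python sketch of a version-agnostic key lookup (the `history` dict below stands in for `train_history.history`):

```python
def pick_key(history, *candidates):
    """Return the first candidate key present in the history dict."""
    for key in candidates:
        if key in history:
            return key
    raise KeyError('none of %r found in %r' % (candidates, list(history)))

# Example history dict as newer Keras versions record it:
history = {'loss': [0.9], 'accuracy': [0.7],
           'val_loss': [1.0], 'val_accuracy': [0.6]}

acc_key = pick_key(history, 'acc', 'accuracy')
val_acc_key = pick_key(history, 'val_acc', 'val_accuracy')
```

With the real `History` object you would then call `show_train_history(train_history, acc_key, val_acc_key)`; printing `train_history.history.keys()` shows which names your Keras version actually logged.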

The relationship between Keras multi-output models and multi-task learning

Reading up on multi-task learning, there is a setup where a main task and auxiliary tasks help each other improve performance. Does a Keras multi-output model count as that kind of multi-task learning, or is it just several mutually independent classification tasks trained together?
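A multi-output model becomes "helpful" multi-task learning precisely when the heads share a trunk, because the shared layers receive gradients from every task; with fully separate branches it is just independent learners. A minimal sketch of a shared-trunk model with a down-weighted auxiliary head, assuming `tensorflow.keras` (layer sizes illustrative):

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(8,))
trunk = Dense(16, activation='relu')(inp)   # shared representation:
                                            # both tasks backprop into it
main_out = Dense(1, activation='sigmoid', name='main')(trunk)
aux_out = Dense(1, activation='sigmoid', name='aux')(trunk)

model = Model(inp, [main_out, aux_out])
model.compile(
    optimizer='adam',
    loss={'main': 'binary_crossentropy', 'aux': 'binary_crossentropy'},
    loss_weights={'main': 1.0, 'aux': 0.3},   # auxiliary task only assists
)
```

`loss_weights` is how the "main vs. auxiliary" asymmetry is usually expressed: the auxiliary head regularizes the shared trunk without dominating training.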

How do I save custom functions (such as a custom loss) together with a Keras model?

```python
# (imports added for completeness)
from keras import objectives
from keras import backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

batch_size = 128
original_dim = 100       # 25*4
latent_dim = 16          # dimension of z
intermediate_dim = 256   # dimension of the intermediate layer
nb_epoch = 50            # number of training epochs
epsilon_std = 1.0        # reparameterization

# encoding
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)     # mu
z_log_var = Dense(latent_dim)(h)  # sigma

# Gaussian sampling: sample z
def sampling(args):  # reparameterization trick
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(128, 16), mean=0., stddev=1.0)
    return z_mean + K.exp(z_log_var / 2) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
# get the sampled z (encoded)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')    # intermediate layer
decoder_mean = Dense(original_dim, activation='sigmoid')  # output layer
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

# loss: reconstruction of X + KL divergence
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.fit(x_train, x_train,
        shuffle=True,
        epochs=nb_epoch,
        verbose=2,
        batch_size=batch_size,
        validation_data=(x_valid, x_valid))
vae.save(path + '//VAE.h5')
```

This code builds a VAE. After saving the model, loading it back first failed because some of the global variables used inside `sampling` were undefined; after replacing them with literal numbers it failed again with `unknown loss function: vae_loss`. I believe the custom functions are not being saved with the model, but I don't know how to solve it. I've only just picked up Keras and TensorFlow and many of these problems are new to me. Thanks a lot for any help!
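This is expected: `save`/`load_model` serializes the architecture and weights but only the *name* of a custom loss, so the function must be defined at load time and passed back via `custom_objects` (and `sampling` must not capture globals that won't exist on reload). A minimal sketch with a toy model and a stand-in loss (names illustrative, `tensorflow.keras` assumed):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model, load_model

def my_custom_loss(y_true, y_pred):      # stand-in for vae_loss
    return tf.reduce_mean(tf.square(y_true - y_pred))

inp = Input(shape=(4,))
model = Model(inp, Dense(1)(inp))
model.compile(optimizer='rmsprop', loss=my_custom_loss)
model.save('toy_model.h5')               # only the loss's *name* is stored

# At load time, map that name back to the actual function:
reloaded = load_model('toy_model.h5',
                      custom_objects={'my_custom_loss': my_custom_loss})
```

If the model is only needed for inference, `load_model(path, compile=False)` sidesteps the loss lookup entirely; for the VAE it is also common to save just the weights with `save_weights` and rebuild the architecture in code, which avoids serializing `sampling` as well.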

Urgent: why do the loss increase and acc drop after a few epochs when training a BiLSTM with Keras?

When training a BiLSTM with Keras, after a few epochs the loss starts increasing and the accuracy drops. What could cause this?

How do I add layers to an already-trained Keras model?

Given an already-trained model, how do I append, say, an LSTM layer or a fully connected layer after it?
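A sketch of the usual pattern: take the trained model's output tensor, stack new layers on it, and wrap everything in a new `Model`. Here a toy `Sequential` stands in for the trained model, and freezing the old part is optional (`tensorflow.keras` assumed):

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model, Sequential

# Stand-in for a model that has already been trained.
base = Sequential([Input(shape=(4,)), Dense(8, activation='relu')])

base.trainable = False                        # optional: keep trained weights fixed
x = Dense(4, activation='relu')(base.output)  # newly appended layers
new_out = Dense(1, activation='sigmoid')(x)

extended = Model(base.input, new_out)
extended.compile(optimizer='adam', loss='binary_crossentropy')
```

Appending an LSTM works the same way, provided the trained model's output is still a 3-D sequence tensor (e.g. its last recurrent layer used `return_sequences=True`).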

Converting a Keras model to a TPU model on Colab

Using a TPU to speed up training, converting the Keras model to a TPU model fails with the error shown: ![screenshot](https://img-ask.csdn.net/upload/202001/14/1578998736_238721.png)

The key code is below. Imports:

```
%tensorflow_version 1.x
import json
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import callbacks
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```

TPU conversion code:

```
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)

self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))

# note: as written, the plain ResNet50 assignment below runs after the
# conversion and overwrites the converted TPU model
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)
```

How the train_on_batch arguments relate to the model's output in Keras

While testing policy gradients with keras + gym on the CartPole balancing task, the model is built as follows:

```
inputs = Input(shape=(4,), name='ob_inputs')
x = Dense(16, activation='relu')(inputs)
x = Dense(16, activation='relu')(x)
x = Dense(1, activation='sigmoid')(x)
model = Model(inputs=inputs, outputs=x)
```

The output layer is a single neuron producing a number in [0, 1], the probability of the cart's action. But the training code is:

```
X = np.array(states)
y = np.array(list(zip(actions, discount_rewards)))
loss = self.model.train_on_batch(X, y)
```

Here the target data `y` is a 2-D array: the first column is the action that was executed and the second is the discounted reward. The network's output and the target data have different dimensions, so how is the loss computed during training? Does it automatically fit only the first column of `y`?
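Keras will not silently fit only the first column: the full 2-column `y` is handed to the loss function, so this setup only makes sense if the model was compiled with a custom loss that slices `y_true`. A hedged sketch of the policy-gradient loss such code usually implies (this is not the asker's actual loss; column meanings taken from the question):

```python
import tensorflow as tf

def pg_loss(y_true, y_pred):
    action = y_true[:, 0:1]   # column 0: action actually taken (0 or 1)
    reward = y_true[:, 1:2]   # column 1: discounted return for that step
    # log-probability of the taken action under the predicted Bernoulli policy
    log_prob = (action * tf.math.log(y_pred + 1e-8)
                + (1.0 - action) * tf.math.log(1.0 - y_pred + 1e-8))
    # maximize return-weighted log-likelihood = minimize its negative
    return -tf.reduce_mean(log_prob * reward)

# model.compile(optimizer='adam', loss=pg_loss) then accepts the 2-column y.
```

With a built-in loss like `'mse'` the shape mismatch would instead broadcast or error, so checking which loss was passed to `compile` answers the question directly.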

How do I train two datasets with the same network in Keras and plot them in one acc-loss figure?

Suppose I have two datasets, A and B, and want to compare their loss-acc curves to decide which dataset is better. How do I train on both with the same network and draw the results in a single acc-loss figure?
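A sketch of the plotting side: run `fit` once per dataset, keep both `History` objects, and draw them on shared axes. The dicts below stand in for the two `history.history` results, and the `Agg` backend lets this run headless:

```python
import matplotlib
matplotlib.use('Agg')          # render without a display
import matplotlib.pyplot as plt

# Stand-ins for model.fit(...).history on datasets A and B.
hist_a = {'acc': [0.60, 0.72, 0.80], 'loss': [0.90, 0.60, 0.45]}
hist_b = {'acc': [0.55, 0.66, 0.74], 'loss': [1.00, 0.72, 0.55]}

fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(9, 3.5))
for name, hist in (('A', hist_a), ('B', hist_b)):
    ax_acc.plot(hist['acc'], label='%s acc' % name)
    ax_loss.plot(hist['loss'], label='%s loss' % name)
ax_acc.set_title('accuracy'); ax_acc.set_xlabel('epoch'); ax_acc.legend()
ax_loss.set_title('loss'); ax_loss.set_xlabel('epoch'); ax_loss.legend()
fig.savefig('compare_acc_loss.png')
```

For a fair comparison, rebuild (or clone) the model between the two `fit` calls so dataset B does not start from weights already trained on A.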

Keras model training fails on a Raspberry Pi 4B with Python 3.7?

1. Training a Keras model on a Raspberry Pi 4B keeps failing. [screenshot](https://img-ask.csdn.net/upload/202005/22/1590078114_273231.jpg)
2. On a Raspberry Pi 3B with Python 3.5.3, training works fine!
3. The code is from an image-recognition package called ms-agv-car-master that I found online: https://github.com/jerry73204/ms-agv-car

```
import os
import glob
import argparse

import cv2
import numpy as np

from keras.models import Model
from keras.layers import Dense, Activation, MaxPool2D, Conv2D, Flatten, Dropout, Input, BatchNormalization, Add
from keras.optimizers import Adam
from keras.utils import multi_gpu_model, plot_model

# Keras built-in models
# https://keras.io/applications
from keras.applications.vgg16 import VGG16
from keras.applications.vgg19 import VGG19
from keras.applications.resnet50 import ResNet50
from keras.applications.densenet import DenseNet121
from keras.applications.mobilenetv2 import MobileNetV2


def custom_model(input_shape, n_classes):
    def conv_block(x, filters):
        x = BatchNormalization()(x)
        x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
        x = BatchNormalization()(x)
        shortcut = x
        x = Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
        x = Add()([x, shortcut])
        x = MaxPool2D((2, 2), strides=(2, 2))(x)
        return x

    input_tensor = Input(shape=input_shape)
    x = conv_block(input_tensor, 32)
    x = conv_block(x, 64)
    x = conv_block(x, 128)
    x = conv_block(x, 256)
    x = conv_block(x, 512)
    x = Flatten()(x)
    x = BatchNormalization()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(512, activation='relu')(x)
    output_layer = Dense(n_classes, activation='softmax')(x)

    inputs = [input_tensor]
    model = Model(inputs, output_layer)
    return model


def main():
    # define command-line arguments
    arg_parser = argparse.ArgumentParser(description='model training example')
    arg_parser.add_argument('--model-file', required=True, help='model description file')
    arg_parser.add_argument('--weights-file', required=True, help='model weights file')
    arg_parser.add_argument('--data-dir', required=True, help='data directory')
    arg_parser.add_argument(
        '--model-type',
        choices=('VGG16', 'VGG19', 'ResNet50', 'DenseNet121', 'MobileNetV2', 'custom'),
        default='custom',
        help='model type',
    )
    arg_parser.add_argument('--epochs', type=int, default=32, help='number of training epochs')
    arg_parser.add_argument('--output-file', default='-', help='prediction output file')
    arg_parser.add_argument('--input-width', type=int, default=48, help='model input width')
    arg_parser.add_argument('--input-height', type=int, default=48, help='model input height')
    arg_parser.add_argument('--load-weights', action='store_true',
                            help='load weights from the file given by --weights-file')
    arg_parser.add_argument('--num-gpu', type=int, default=1, help='number of GPUs to use, default 1')
    arg_parser.add_argument('--plot-model', help='plot the model architecture')
    args = arg_parser.parse_args()

    # data parameters
    input_height = args.input_height
    input_width = args.input_width
    input_channel = 3
    input_shape = (input_height, input_width, input_channel)
    n_classes = 4

    # build the model
    if args.model_type == 'VGG16':
        input_tensor = Input(shape=input_shape)
        model = VGG16(input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor)
    elif args.model_type == 'VGG19':
        input_tensor = Input(shape=input_shape)
        model = VGG19(input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor)
    elif args.model_type == 'ResNet50':
        input_tensor = Input(shape=input_shape)
        model = ResNet50(input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor)
    elif args.model_type == 'DenseNet121':
        input_tensor = Input(shape=input_shape)
        model = DenseNet121(input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor)
    elif args.model_type == 'MobileNetV2':
        input_tensor = Input(shape=input_shape)
        model = MobileNetV2(input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor)
    elif args.model_type == 'custom':
        model = custom_model(input_shape, n_classes)

    if args.num_gpu > 1:
        model = multi_gpu_model(model, gpus=args.num_gpu)

    if args.plot_model is not None:
        plot_model(model, to_file=args.plot_model)

    adam = Adam()
    model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['acc'])

    # find all image files
    match_left = os.path.join(args.data_dir, 'left', '*.jpg')
    paths_left = glob.glob(match_left)
    match_right = os.path.join(args.data_dir, 'right', '*.jpg')
    paths_right = glob.glob(match_right)
    match_stop = os.path.join(args.data_dir, 'stop', '*.jpg')
    paths_stop = glob.glob(match_stop)
    match_other = os.path.join(args.data_dir, 'other', '*.jpg')
    paths_other = glob.glob(match_other)
    match_test = os.path.join(args.data_dir, 'test', '*.jpg')
    paths_test = glob.glob(match_test)

    n_train = len(paths_left) + len(paths_right) + len(paths_stop) + len(paths_other)
    n_test = len(paths_test)

    # initialize dataset arrays
    trainset = np.zeros(shape=(n_train, input_height, input_width, input_channel), dtype='float32')
    label = np.zeros(shape=(n_train, n_classes), dtype='float32')
    testset = np.zeros(shape=(n_test, input_height, input_width, input_channel), dtype='float32')

    # read the images into the datasets
    paths_train = paths_left + paths_right + paths_stop + paths_other
    for ind, path in enumerate(paths_train):
        image = cv2.imread(path)
        resized_image = cv2.resize(image, (input_width, input_height))
        trainset[ind] = resized_image
    for ind, path in enumerate(paths_test):
        image = cv2.imread(path)
        resized_image = cv2.resize(image, (input_width, input_height))
        testset[ind] = resized_image

    # set the training labels
    n_left = len(paths_left)
    n_right = len(paths_right)
    n_stop = len(paths_stop)
    n_other = len(paths_other)

    begin_ind = 0
    end_ind = n_left
    label[begin_ind:end_ind, 0] = 1.0

    begin_ind = n_left
    end_ind = n_left + n_right
    label[begin_ind:end_ind, 1] = 1.0

    begin_ind = n_left + n_right
    end_ind = n_left + n_right + n_stop
    label[begin_ind:end_ind, 2] = 1.0

    begin_ind = n_left + n_right + n_stop
    end_ind = n_left + n_right + n_stop + n_other
    label[begin_ind:end_ind, 3] = 1.0

    # normalize values into the 0~1 range
    trainset = trainset / 255.0
    testset = testset / 255.0

    # load model weights
    if args.load_weights:
        model.load_weights(args.weights_file)

    # train the model
    if args.epochs > 0:
        model.fit(trainset, label, epochs=args.epochs, validation_split=0.2, batch_size=64)

    # save the model architecture and weights
    model_desc = model.to_json()
    with open(args.model_file, 'w') as file_model:
        file_model.write(model_desc)
    model.save_weights(args.weights_file)

    # run prediction
    if testset.shape[0] != 0:
        result_onehot = model.predict(testset)
        result_sparse = np.argmax(result_onehot, axis=1)
    else:
        result_sparse = list()

    # print the predictions ('檔名' = filename, '預測類別' = predicted class)
    if args.output_file == '-':
        print('檔名\t預測類別')
        for path, label_id in zip(paths_test, result_sparse):
            filename = os.path.basename(path)
            if label_id == 0:
                label_name = 'left'
            elif label_id == 1:
                label_name = 'right'
            elif label_id == 2:
                label_name = 'stop'
            elif label_id == 3:
                label_name = 'other'
            print('%s\t%s' % (filename, label_name))
    else:
        with open(args.output_file, 'w') as file_out:
            file_out.write('檔名\t預測類別\n')
            for path, label_id in zip(paths_test, result_sparse):
                filename = os.path.basename(path)
                if label_id == 0:
                    label_name = 'left'
                elif label_id == 1:
                    label_name = 'right'
                elif label_id == 2:
                    label_name = 'stop'
                elif label_id == 3:
                    label_name = 'other'
                file_out.write('%s\t%s\n' % (filename, label_name))


if __name__ == '__main__':
    main()
```

Keras LSTM time-series prediction: loss and accuracy barely change

I'm doing LSTM time-series prediction with Keras. I've varied the number of epochs, but during training the loss stays around 0.05 and the accuracy around 0.5, and the test-set loss and accuracy are the same two numbers. What is the problem, and how do I fix it?

A "Schrödinger's" training-result problem in Keras

I just started learning Keras. Today, while testing nonlinear function fitting, I found that even with a 'relu' activation the fit is still poor, which has bothered me for a while. Stranger still, a statement that looks completely unrelated to the result directly changes the distribution of the output. It is this line:

```
print(y_pred)
```

Result without it: ![screenshot](https://img-ask.csdn.net/upload/202004/24/1587719740_46631.jpg)
Result with it: ![screenshot](https://img-ask.csdn.net/upload/202004/24/1587719761_631438.jpg) or ![screenshot](https://img-ask.csdn.net/upload/202004/24/1587719776_946600.jpg)

The code is:

```
import keras
import numpy as np
import matplotlib.pyplot as plt
# sequential model
from keras.models import Sequential
# fully connected layer
from keras.layers import Dense, Activation
from keras.optimizers import SGD

# generate random data with numpy
x_data = np.linspace(-0.5, 0.5, 200)
noise = np.random.normal(0, 0.02, x_data.shape)
y_data = np.square(x_data) + noise

# show the random points
plt.scatter(x_data, y_data)
plt.show()

# build a sequential model
model = Sequential()
# add a fully connected layer
model.add(Dense(units=10, input_dim=1, activation='relu'))
# model.add(Activation("relu")) doesn't work?
model.add(Dense(units=1, activation='relu'))
# model.add(Activation("relu")) doesn't work

# define the optimizer
sgd = SGD(lr=0.3)
model.compile(optimizer=sgd, loss="mse")

for step in range(3000):
    cost = model.train_on_batch(x_data, y_data)
    if step % 500 == 0:
        print("cost: ", cost)

W, b = model.layers[0].get_weights()
print("W: ", W, "b: ", b)

# feed x_data into the network to get predictions
y_pred = model.predict(x_data)
# whether or not this line is present directly affects the result
print(y_pred)

plt.scatter(x_data, y_pred)
plt.plot(x_data, y_pred, "r-", lw=3)
plt.show()
```

The loss does not change at all during training. Where did I go wrong?

![screenshot](https://img-ask.csdn.net/upload/201902/20/1550603855_716468.jpg) ![screenshot](https://img-ask.csdn.net/upload/201902/20/1550603869_798720.jpg) ![screenshot](https://img-ask.csdn.net/upload/201902/20/1550603889_814072.jpg)

During Keras model training, train_loss and train_acc keep changing, but val_loss and val_acc never change. What is wrong?

```
Epoch 1/15
3112/3112 [==============================] - 73s 237ms/step - loss: 8.1257 - acc: 0.4900 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 2/15
3112/3112 [==============================] - 71s 231ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 3/15
3112/3112 [==============================] - 72s 232ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4427
Epoch 4/15
3112/3112 [==============================] - 71s 229ms/step - loss: 7.0495 - acc: 0.5617 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 5/15
3112/3112 [==============================] - 71s 230ms/step - loss: 5.5504 - acc: 0.6549 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 6/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.9359 - acc: 0.6931 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 7/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.8969 - acc: 0.6957 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 8/15
3112/3112 [==============================] - 72s 234ms/step - loss: 4.9446 - acc: 0.6925 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 9/15
3112/3112 [==============================] - 71s 231ms/step - loss: 4.5114 - acc: 0.7201 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 10/15
3112/3112 [==============================] - 73s 237ms/step - loss: 4.7944 - acc: 0.7021 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 11/15
3112/3112 [==============================] - 74s 240ms/step - loss: 4.6789 - acc: 0.7095 - val_loss: 8.1763 - val_acc: 0.4927
```

Saving a TF model trained with Keras, then evaluating it in Go

I'm trying to set up a classical MNIST challenge model with `keras`, then save the `tensorflow` graph and subsequently load it in `Go` and evaluate it with some input. I've been following [this article](https://nilsmagnus.github.io/post/go-tensorflow/), which supplies full code on [github](https://github.com/nilsmagnus/tensorflow-with-go). Nils uses plain tensorflow to set up the computation graph, but I would like to use `keras`. I managed to save the model the same way he does.

model:

```
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1), name="inputNode"))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax', name="inferNode"))
```

which runs fine, trains and evaluates, and is then saved as posted above:

```
builder = tf.saved_model.builder.SavedModelBuilder("mnistmodel_my")
# GOLANG note that we must tag our model so that we can retrieve it at inference-time
builder.add_meta_graph_and_variables(sess, ["serve"])
builder.save()
```

Which I then try to evaluate as:

```
result, runErr := model.Session.Run(
    map[tf.Output]*tf.Tensor{
        model.Graph.Operation("inputNode").Output(0): tensor,
    },
    []tf.Output{
        model.Graph.Operation("inferNode").Output(0),
    },
    nil,
)
```

In Go I follow the example, but when evaluating I get:

```
panic: nil-Operation. If the Output was created with a Scope object, see Scope.Err() for details.

goroutine 1 [running]:
github.com/tensorflow/tensorflow/tensorflow/go.Output.c(0x0, 0x0, 0x0, 0x0)
    /Users/air/go/src/github.com/tensorflow/tensorflow/tensorflow/go/operation.go:119 +0xbb
github.com/tensorflow/tensorflow/tensorflow/go.newCRunArgs(0xc42006e210, 0xc420047ef0, 0x1, 0x1, 0x0, 0x0, 0x0, 0xc4200723c8)
    /Users/air/go/src/github.com/tensorflow/tensorflow/tensorflow/go/session.go:307 +0x22d
github.com/tensorflow/tensorflow/tensorflow/go.(*Session).Run(0xc420078060, 0xc42006e210, 0xc420047ef0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
    /Users/air/go/src/github.com/tensorflow/tensorflow/tensorflow/go/session.go:85 +0x153
main.main()
    /Users/air/PycharmProjects/GoTensor/custom.go:36 +0x341
exit status 2
```

Since it says `nil-Operation`, I think I may have labelled the nodes incorrectly. But which nodes should I label instead?

Many thanks!!!

Why does the loss behave so strangely when I train with TensorFlow 2.0?

I built a DeepFM model with tf 2.0 for a binary-classification prediction. During training the train loss keeps falling while the val loss keeps rising, and halfway through training it runs out of memory. What is going on?

```
Train on 19532 steps, validate on 977 steps
Epoch 1/5
19532/19532 [==============================] - 549s 28ms/step - loss: 0.4660 - AUC: 0.8519 - val_loss: 1.0059 - val_AUC: 0.5829
Epoch 2/5
19532/19532 [==============================] - 522s 27ms/step - loss: 0.1861 - AUC: 0.9787 - val_loss: 1.7618 - val_AUC: 0.5590
Epoch 3/5
17150/19532 [=========================>....] - ETA: 1:06 - loss: 0.0877 - AUC: 0.9951
Process finished with exit code 137
```

One more question: I disabled eager mode in my code, so I had to initialize with:

```
sess.run([tf.compat.v1.global_variables_initializer(), tf.compat.v1.tables_initializer()])
```

But my code also uses other initialization methods:

```
initializer = tf.keras.initializers.TruncatedNormal(stddev=stddev, seed=29)
regularizer = tf.keras.regularizers.l2(l2_reg)
....
dnn_hidden_layer_3 = tf.keras.layers.Dense(64, activation='selu',
                                           kernel_initializer=initializer,
                                           kernel_regularizer=regularizer)(dnn_hidden_layer_2)
....
```

If I do it this way, will the variables still be initialized with the initializers I defined? I'm a complete beginner, so thanks a lot in advance!

How can a custom Keras loss function multiply or add a value that changes during training?

Once the loss function is compiled, it seems to depend only on y_pred and y_true. I want to define a loss that also depends on the previous step's loss value.
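One common pattern (a sketch, not the only way): close the loss over a `tf.Variable` and update that variable from a callback, e.g. storing the previous epoch's loss so the next epoch's loss can read it. All names are illustrative; `tensorflow.keras` is assumed:

```python
import tensorflow as tf

prev_loss = tf.Variable(0.0, trainable=False)  # value the compiled loss reads at run time

def history_aware_mse(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # example use of the external value: mix in last epoch's recorded loss
    return mse + 0.1 * prev_loss

class PrevLossUpdater(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # push this epoch's mean loss into the variable for the next epoch
        prev_loss.assign(logs['loss'])

# model.compile(optimizer='adam', loss=history_aware_mse)
# model.fit(x, y, callbacks=[PrevLossUpdater()])
```

The key point is that the *variable object* is baked into the compiled graph, so assigning to it later changes what the loss computes without recompiling the model.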

A problem when saving a Keras model

Help, everyone. I've recently been running a neural-network model with Keras. Training and testing both work fine, but saving fails. I save the model with:

```
json_string = model.to_json()
open('my_model_architecture.json', 'w').write(json_string)   # save the architecture
model.save_weights('my_model_weights.h5', overwrite='true')  # save the weights
```

After running this, the process exits with `Process finished with exit code -1073741819 (0xC0000005)` and the weights .h5 file is empty. What is going on?

Keras model predictions (predict) are all zeros

I built and trained a model with Keras; the trained model is on Baidu cloud drive, link: https://pan.baidu.com/s/1wQ5MLhPDfhwlveY-ib92Ew password: f3gk. When calling keras `predict`, the output is all zeros no matter what the input is. Code:

```python
from keras.models import Sequential, Model
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
from keras.utils import plot_model
from keras.models import load_model
from keras import metrics
import numpy as np
import os
import json
import keras
import matplotlib.pyplot as plt
import math
from keras import losses
import shutil
from keras import backend as K
from keras import optimizers

# custom loss function
def my_loss(y_true, y_pred):
    if not K.is_tensor(y_pred):
        y_pred = K.constant(y_pred, dtype='float64')
    y_true = K.cast(y_true, y_pred.dtype)
    return K.mean(K.abs((y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None)))

# custom evaluation metric
def mean_squared_percentage_error(y_true, y_pred):
    if not K.is_tensor(y_pred):
        y_pred = K.constant(y_pred, dtype='float64')
    y_true = K.cast(y_true, y_pred.dtype)
    return K.mean(K.square((y_pred - y_true) / K.clip(K.abs(y_true), K.epsilon(), None)))

model_path = os.path.join('model/model', 'model.h5')
seq = load_model(model_path,
                 custom_objects={'my_loss': my_loss,
                                 'mean_squared_percentage_error': mean_squared_percentage_error})
print(seq.summary())

input_data = np.random.random([1, 12, 56, 56, 1])
output_data = seq.predict(input_data, batch_size=16, verbose=1)
print(output_data[0][:, :, 0])
```

The output is:

```python
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 56, 56, 40)  59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 56, 56, 40)  160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 56, 56, 40)  115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 56, 56, 40)  160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, 56, 56, 1)         1480
=================================================================
Total params: 176,360
Trainable params: 176,200
Non-trainable params: 160
_________________________________________________________________
None
1/1 [==============================] - 1s 812ms/step
[[ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 ...
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0. -0.]]
```

I don't understand why this happens; even a randomly generated input gives this result.
