During Keras model training, train_loss and train_acc keep changing, but val_loss and val_acc never change — where is the problem?

Epoch 1/15
3112/3112 [==============================] - 73s 237ms/step - loss: 8.1257 - acc: 0.4900 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 2/15
3112/3112 [==============================] - 71s 231ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 3/15
3112/3112 [==============================] - 72s 232ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4427
Epoch 4/15
3112/3112 [==============================] - 71s 229ms/step - loss: 7.0495 - acc: 0.5617 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 5/15
3112/3112 [==============================] - 71s 230ms/step - loss: 5.5504 - acc: 0.6549 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 6/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.9359 - acc: 0.6931 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 7/15
3112/3112 [==============================] - 71s 230ms/step - loss: 4.8969 - acc: 0.6957 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 8/15
3112/3112 [==============================] - 72s 234ms/step - loss: 4.9446 - acc: 0.6925 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 9/15
3112/3112 [==============================] - 71s 231ms/step - loss: 4.5114 - acc: 0.7201 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 10/15
3112/3112 [==============================] - 73s 237ms/step - loss: 4.7944 - acc: 0.7021 - val_loss: 8.1763 - val_acc: 0.4927
Epoch 11/15
3112/3112 [==============================] - 74s 240ms/step - loss: 4.6789 - acc: 0.7095 - val_loss: 8.1763 - val_acc: 0.4927

1 answer

It means your sample size is too small; the model has clearly overfit.

caozhy
caozhy (贵阳老马马善福专业维修游泳池堵漏防水工程) replied to weixin_42062762: The model is simple, and some weights drift toward 0 or toward very large values, so further training leaves the forward-pass result on certain samples unchanged — that is quite normal.
Replied 8 months ago
weixin_42062762
weixin_42062762: If it is overfitting, why does the validation score not change at all?
Replied 8 months ago
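One way to sanity-check the explanation above (weights drifting toward 0 or toward very large values, so the forward pass gives the same result on every validation sample) is to dump the validation predictions and the per-layer weight statistics after a few epochs. A minimal diagnostic sketch, assuming the training script exposes model, x_val and y_val (illustrative names, not from the original post):

```
import numpy as np

# If every validation sample gets the same prediction, the output layer has saturated
preds = model.predict(x_val)
print("prediction stats:", preds.min(), preds.max(), preds.mean())
if preds.shape[-1] > 1:
    print("unique predicted classes:", np.unique(preds.argmax(axis=-1)))
else:
    print("unique predicted classes:", np.unique((preds > 0.5).astype(int)))

# Weights stuck near 0 or blown up show up in these per-layer statistics
for layer in model.layers:
    for w in layer.get_weights():
        print(layer.name, w.shape, "min:", w.min(), "max:", w.max(), "mean |w|:", np.abs(w).mean())
```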
Other related questions
How should I handle keras model.fit_generator being unable to load the training set after finishing one epoch?
1. While training the neural network, I hit a problem where training cannot continue after the first epoch finishes; a screenshot of the error is below. ![图片说明](https://img-ask.csdn.net/upload/202002/08/1581151633_972155.png) The data-generation code is as follows:
```
def GET_DATASET_SHUFFLE(train_x, train_y, batch_size):
    #random.shuffle(X_samples)
    batch_num = int(len(train_x) / batch_size)
    max_len = batch_num * batch_size
    X_samples = np.array(train_x[0:max_len])
    Y_samples = np.array(train_y[0:max_len])
    X_batches = np.split(X_samples, batch_num)
    Y_batches = np.split(Y_samples, batch_num)
    for i in range(batch_num):
        x = np.array(list(map(load_image, X_batches[i])))
        y = np.array(list(map(load_label, Y_batches[i])))
        yield x, y
```
I'd like to ask the experts for advice — I've only just started with this and don't understand it well.
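The generator above is exhausted after yielding batch_num batches — exactly one epoch of data — whereas fit_generator expects a generator that keeps producing batches forever. A minimal sketch of the usual fix, wrapping the same logic in an endless loop (it reuses the load_image/load_label helpers from the question and is illustrative, not necessarily the author's final solution):

```
def GET_DATASET_SHUFFLE(train_x, train_y, batch_size):
    batch_num = int(len(train_x) / batch_size)
    max_len = batch_num * batch_size
    while True:  # fit_generator keeps pulling batches, so the generator must never end
        X_batches = np.split(np.array(train_x[0:max_len]), batch_num)
        Y_batches = np.split(np.array(train_y[0:max_len]), batch_num)
        for i in range(batch_num):
            x = np.array(list(map(load_image, X_batches[i])))
            y = np.array(list(map(load_label, Y_batches[i])))
            yield x, y
```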
Question about keras model.predict_classes()
keras model.predict_classes() only works with a Sequential model. For a Model built with the functional API, how can I get the same class-label output? e.g.
results = list(model.predict_classes(data_test, verbose=1))
score = accuracy_score(label_test, results)
This approach works on a Sequential model; how can the same effect be achieved with a functional model?
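A functional-API Model has no predict_classes(), but the usual equivalent is to take the argmax of predict() for a softmax output, or to threshold at 0.5 for a single sigmoid output. A small sketch under those assumptions, reusing data_test and label_test from the question:

```
import numpy as np
from sklearn.metrics import accuracy_score

probs = model.predict(data_test, verbose=1)
results = np.argmax(probs, axis=-1)                    # multi-class softmax output
# results = (probs > 0.5).astype('int32').ravel()      # single sigmoid output instead
score = accuracy_score(label_test, results)
```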
The run produces: train(generator,discriminator,gan_model,latent_dim) NameError: name 'train' is not defined — how can this be fixed?
```
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from numpy import hstack
from numpy import zeros
from numpy import ones
from numpy.random import rand
from numpy.random import randn
from keras.models import Sequential
from keras.layers import Dense

def define_discriminator(n_inputs=2):
    model=Sequential()
    model.add(Dense(25, activation='relu',kernel_initializer='he_uniform',input_dim=n_inputs))
    model.add(Dense(1,activation='sigmoid'))
    model.compile(loss='binary_crossentropy',optimizer='adam', metrics=['accuracy'])
    return model

def define_generator(latent_dim,n_outputs=2):
    model=Sequential()
    model.add(Dense(15, activation='relu',kernel_initializer='he_uniform', input_dim=latent_dim))
    model.add(Dense(n_outputs,activation='linear'))
    return model

def define_gan(generator,discriminator):
    discriminator.trainable=False
    model=Sequential()
    model.add(generator)
    model.add(discriminator)
    model.compile(loss='binary_crossentropy',optimizer='adam')
    return model

def generate_real_samples(n):
    x1=rand(n)-0.5
    x2=x1*x1
    x1=x1.reshape(n,1)
    x2=x2.reshape(n,1)
    x=hstack((x1,x2))
    y=ones((n,1))
    return x,y

def generate_latent_points(latent_dim,n):
    x_input=randn(latent_dim*n)
    x_input=x_input.reshape(n,latent_dim)
    return x_input

def generate_fake_samples(generator,latent_dim,n):
    x_input=generate_latent_points(latent_dim,n)
    x=generator.predict(x_input)
    y=zeros((n,1))
    return x,y

def summarize_performance(epoch,generator,discriminator,latent_dim,n=100):
    x_real,y_real=generate_real_samples(n)
    _,acc_real=discriminator.evaluate(x_real,y_real,verbose=0)
    x_fake, y_fake = generate_fake_samples(generator,latent_dim,n)
    _, acc_fake = discriminator.evaluate(x_fake, y_fake, verbose=0)
    print(epoch,acc_real,acc_fake)
    plt.scatter(x_real[:,0],x_real[:,1],color='red')
    plt.scatter(x_fake[:, 0], x_fake[:, 1], color='blue')
    plt.show()

def train(g_model,d_model,gan_model,latent_dim,n_epochs=10000,n_batch=128,n_eval=2000):
    half_batch=int(n_batch/2)
    for i in range(n_epochs):
        x_real,y_real=generate_real_samples(half_batch)
        x_fake,y_fake=generate_fake_samples(g_model,latent_dim,half_batch)
        d_model.train_on_batch(x_real,y_real)
        d_model.train_on_batch(x_fake, y_fake)
        x_gan=generate_latent_points(latent_dim,n_batch)
        y_gan=ones((n_batch,1))
        gan_model.train_on_batch(x_gan,y_gan)
        if(i+1)%n_epochs==0:
            summarize_performance(i,g_model,d_model,latent_dim)

latent_dim=5
discriminator=define_discriminator()
generator=define_generator(latent_dim)
gan_model=define_gan(generator,discriminator)
train(generator,discriminator,gan_model,latent_dim)
```
Question about fashion_mnist recognition accuracy
What accuracy is typical for fashion_mnist recognition? I've seen many people get around 92%, but I reached 94% with one network, so I'd like to ask those who have tried it: what do you actually get?
```
# these are my results
x_shape: (60000, 28, 28)
y_shape: (60000,)
epoches: 0 val_acc: 0.4991 train_acc 0.50481665
epoches: 1 val_acc: 0.6765 train_acc 0.66735
epoches: 2 val_acc: 0.755 train_acc 0.7474
epoches: 3 val_acc: 0.7846 train_acc 0.77915
epoches: 4 val_acc: 0.798 train_acc 0.7936
epoches: 5 val_acc: 0.8082 train_acc 0.80365
epoches: 6 val_acc: 0.8146 train_acc 0.8107
epoches: 7 val_acc: 0.8872 train_acc 0.8872333
epoches: 8 val_acc: 0.896 train_acc 0.89348334
epoches: 9 val_acc: 0.9007 train_acc 0.8986
epoches: 10 val_acc: 0.9055 train_acc 0.90243334
epoches: 11 val_acc: 0.909 train_acc 0.9058833
epoches: 12 val_acc: 0.9112 train_acc 0.90868336
epoches: 13 val_acc: 0.9126 train_acc 0.91108334
epoches: 14 val_acc: 0.9151 train_acc 0.9139
epoches: 15 val_acc: 0.9172 train_acc 0.91595
epoches: 16 val_acc: 0.9191 train_acc 0.91798335
epoches: 17 val_acc: 0.9204 train_acc 0.91975
epoches: 18 val_acc: 0.9217 train_acc 0.9220333
epoches: 19 val_acc: 0.9252 train_acc 0.9234667
epoches: 20 val_acc: 0.9259 train_acc 0.92515
epoches: 21 val_acc: 0.9281 train_acc 0.9266667
epoches: 22 val_acc: 0.9289 train_acc 0.92826664
epoches: 23 val_acc: 0.9301 train_acc 0.93005
epoches: 24 val_acc: 0.9315 train_acc 0.93126667
epoches: 25 val_acc: 0.9322 train_acc 0.9328
epoches: 26 val_acc: 0.9331 train_acc 0.9339667
epoches: 27 val_acc: 0.9342 train_acc 0.93523335
epoches: 28 val_acc: 0.9353 train_acc 0.93665
epoches: 29 val_acc: 0.9365 train_acc 0.9379333
epoches: 30 val_acc: 0.9369 train_acc 0.93885
epoches: 31 val_acc: 0.9387 train_acc 0.9399
epoches: 32 val_acc: 0.9395 train_acc 0.9409
epoches: 33 val_acc: 0.94 train_acc 0.9417667
epoches: 34 val_acc: 0.9403 train_acc 0.94271666
epoches: 35 val_acc: 0.9409 train_acc 0.9435167
epoches: 36 val_acc: 0.9418 train_acc 0.94443333
epoches: 37 val_acc: 0.942 train_acc 0.94515
epoches: 38 val_acc: 0.9432 train_acc 0.9460667
epoches: 39 val_acc: 0.9443 train_acc 0.9468833
epoches: 40 val_acc: 0.9445 train_acc 0.94741666
epoches: 41 val_acc: 0.9462 train_acc 0.9482
epoches: 42 val_acc: 0.947 train_acc 0.94893336
epoches: 43 val_acc: 0.9472 train_acc 0.94946665
epoches: 44 val_acc: 0.948 train_acc 0.95028335
epoches: 45 val_acc: 0.9486 train_acc 0.95095
epoches: 46 val_acc: 0.9488 train_acc 0.9515833
epoches: 47 val_acc: 0.9492 train_acc 0.95213336
epoches: 48 val_acc: 0.9495 train_acc 0.9529833
epoches: 49 val_acc: 0.9498 train_acc 0.9537
val_acc: 0.9498
```
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

def to_onehot(y,num):
    lables = np.zeros([num,len(y)])
    for i in range(len(y)):
        lables[y[i],i] = 1
    return lables.T

# preprocess the data
mnist = keras.datasets.fashion_mnist
(train_images,train_lables),(test_images,test_lables) = mnist.load_data()
print('x_shape:',train_images.shape)  #(60000)
print('y_shape:',train_lables.shape)
X_train = train_images.reshape((-1,train_images.shape[1]*train_images.shape[1])) / 255.0
#X_train = tf.reshape(X_train,[-1,X_train.shape[1]*X_train.shape[2]])
Y_train = to_onehot(train_lables,10)
X_test = test_images.reshape((-1,test_images.shape[1]*test_images.shape[1])) / 255.0
Y_test = to_onehot(test_lables,10)

# neural network with two hidden layers
input_nodes = 784
output_nodes = 10
layer1_nodes = 100
layer2_nodes = 50
batch_size = 100
learning_rate_base = 0.8
learning_rate_decay = 0.99
regularization_rate = 0.0000001
epochs = 50
mad = 0.99
learning_rate = 0.005

# def inference(input_tensor,avg_class,w1,b1,w2,b2):
#     if avg_class == None:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor,w1)+b1)
#         return tf.nn.softmax(tf.matmul(layer1,w2) + b2)
#     else:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor,avg_class.average(w1)) + avg_class.average(b1))
#         return tf.matual(layer1,avg_class.average(w2)) + avg_class.average(b2)

def train(mnist):
    X = tf.placeholder(tf.float32,[None,input_nodes],name = "input_x")
    Y = tf.placeholder(tf.float32,[None,output_nodes],name = "y_true")
    w1 = tf.Variable(tf.truncated_normal([input_nodes,layer1_nodes],stddev=0.1))
    b1 = tf.Variable(tf.constant(0.1,shape=[layer1_nodes]))
    w2 = tf.Variable(tf.truncated_normal([layer1_nodes,layer2_nodes],stddev=0.1))
    b2 = tf.Variable(tf.constant(0.1,shape=[layer2_nodes]))
    w3 = tf.Variable(tf.truncated_normal([layer2_nodes,output_nodes],stddev=0.1))
    b3 = tf.Variable(tf.constant(0.1,shape=[output_nodes]))
    layer1 = tf.nn.relu(tf.matmul(X,w1)+b1)
    A2 = tf.nn.relu(tf.matmul(layer1,w2)+b2)
    A3 = tf.nn.relu(tf.matmul(A2,w3)+b3)
    y_hat = tf.nn.softmax(A3)
    # y_hat = inference(X,None,w1,b1,w2,b2)
    # global_step = tf.Variable(0,trainable=False)
    # variable_averages = tf.train.ExponentialMovingAverage(mad,global_step)
    # varible_average_op = variable_averages.apply(tf.trainable_variables())
    #y = inference(x,variable_averages,w1,b1,w2,b2)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3,labels=Y))
    regularizer = tf.contrib.layers.l2_regularizer(regularization_rate)
    regularization = regularizer(w1) + regularizer(w2) +regularizer(w3)
    loss = cross_entropy + regularization * regularization_rate
    # learning_rate = tf.train.exponential_decay(learning_rate_base,global_step,epchos,learning_rate_decay)
    # train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
    # with tf.control_dependencies([train_step,varible_average_op]):
    #     train_op = tf.no_op(name="train")
    correct_prediction = tf.equal(tf.argmax(y_hat,1),tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    total_loss = []
    val_acc = []
    total_train_acc = []
    x_Xsis = []
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(epochs):
            # x,y = next_batch(X_train,Y_train,batch_size)
            batchs = int(X_train.shape[0] / batch_size + 1)
            loss_e = 0.
            for j in range(batchs):
                batch_x = X_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:]
                batch_y = Y_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:]
                sess.run(train_step,feed_dict={X:batch_x,Y:batch_y})
                loss_e += sess.run(loss,feed_dict={X:batch_x,Y:batch_y})
                # train_step.run(feed_dict={X:x,Y:y})
            validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test})
            train_acc = sess.run(accuracy,feed_dict={X:X_train,Y:Y_train})
            print("epoches: ",i,"val_acc: ",validate_acc,"train_acc",train_acc)
            total_loss.append(loss_e / batch_size)
            val_acc.append(validate_acc)
            total_train_acc.append(train_acc)
            x_Xsis.append(i)
        validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test})
        print("val_acc: ",validate_acc)
    return (x_Xsis,total_loss,total_train_acc,val_acc)

result = train((X_train,Y_train,X_test,Y_test))

def plot_acc(total_train_acc,val_acc,x):
    plt.figure()
    plt.plot(x,total_train_acc,'--',color = "red",label="train_acc")
    plt.plot(x,val_acc,color="green",label="val_acc")
    plt.xlabel("Epoches")
    plt.ylabel("acc")
    plt.legend()
    plt.show()
```
Keras progress bar prints on multiple lines when running in PyCharm
Recently, when running Keras code in PyCharm, the progress bar gets printed across multiple lines and I don't know why; when I run the same code in Spyder the progress bar updates normally on a single line. The code is a standard deep-learning example. I couldn't find a good solution by searching, so I'd appreciate any help with this problem.
```
from keras import layers,models
from keras.datasets import mnist
from keras.utils import to_categorical

(train_images,train_labels),(test_images,test_labels) = mnist.load_data()
train_images = train_images.reshape((60000,28,28,1))
train_images = train_images.astype('float32')/255
test_images = test_images.reshape((10000,28,28,1))
test_images = test_images.astype('float32')/255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(28,28,1)))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPool2D(2,2))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64,activation='relu'))
model.add(layers.Dense(10,activation='softmax'))
model.summary()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images,train_labels,epochs=6,batch_size=64)
#test_loss,test_acc = model.evaluate(test_images,test_labels)
# print(test_loss,test_acc)
```
![图片说明](https://img-ask.csdn.net/upload/201910/07/1570448232_727191.png)
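A hedged workaround rather than a root-cause fix: the Keras progress bar redraws itself with carriage returns, which PyCharm's default run console renders as extra lines. Either enable "Emulate terminal in output console" in the PyCharm run configuration, or switch to one-line-per-epoch output:

```
model.fit(train_images, train_labels, epochs=6, batch_size=64, verbose=2)  # verbose=2: no in-place progress bar
```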
Custom TensorFlow loss function focal_loss produces inf during training
![图片说明](https://img-ask.csdn.net/upload/201905/05/1557048780_248292.png)
```python
def focal_loss(alpha=0.25, gamma=2.):
    """ focal loss used for train positive/negative samples rate out of balance, improve train performance """
    def focal_loss_calc(y_true, y_pred):
        positive = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
        negative = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
        return -(alpha*K.pow(1.-positive, gamma)*K.log(positive) +
                 (1-alpha)*K.pow(negative, gamma)*K.log(1.-negative))
    return focal_loss_calc
```
```python
self.keras_model.compile(optimizer=optimizer, loss=dice_focal_loss, metrics=[
    mean_iou, dice_loss, focal_loss()])
```
The focal loss above is fine at first and gradually decreases to about 0.025 as training progresses, then suddenly becomes inf. How can this be resolved?
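One plausible cause of the sudden inf: as soon as y_pred reaches exactly 0 or 1, log(positive) or log(1.-negative) evaluates to log(0). Clipping the predictions away from the boundaries is the standard guard; a sketch of the same loss with clipping added (an assumption, not confirmed as the author's fix):

```python
from keras import backend as K
import tensorflow as tf

def focal_loss(alpha=0.25, gamma=2.):
    def focal_loss_calc(y_true, y_pred):
        # keep log() finite even when the network saturates to 0 or 1
        y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())
        positive = tf.where(tf.equal(y_true, 1), y_pred, tf.ones_like(y_pred))
        negative = tf.where(tf.equal(y_true, 0), y_pred, tf.zeros_like(y_pred))
        return -(alpha*K.pow(1.-positive, gamma)*K.log(positive) +
                 (1-alpha)*K.pow(negative, gamma)*K.log(1.-negative))
    return focal_loss_calc
```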
Keras multi-GPU training: one of the GPUs is never used
I have already set it up with multi_gpu_model, but at runtime only one GPU is used and the other compute card stays completely idle. What could the reason be?
```
from keras.utils import multi_gpu_model
...
model = build_model()
optimizer = keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-06)
model_parallel=multi_gpu_model(model,2)
model_parallel.compile(loss='mse', optimizer=optimizer, metrics=['mae'])
...
history = model_parallel.fit(train_data, y_train, epochs=EPOCHS,
                             validation_split=0.2, verbose=1, callbacks=[PrintDot()])
```
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573813427_19477.jpg)
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573813436_730710.jpg)
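A hedged checklist rather than a definitive fix: first confirm that TensorFlow actually sees both cards before the model is built, since a masked device is a common reason multi_gpu_model silently falls back to one GPU (the device IDs below are illustrative):

```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"     # must be set before TensorFlow initializes

from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"])

from keras.utils import multi_gpu_model
model = build_model()                          # build_model() is the question's own helper
model_parallel = multi_gpu_model(model, gpus=2)
```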
Training on the fashion_mnist dataset with TensorFlow in Spyder: the loss is NaN and the accuracy never changes
1. I built a neural network with three fully connected layers.
2. The code is as follows:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, sklearn, tf, keras:
    print(module.__name__,module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

#tf.keras.models.Sequential
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape= [28,28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10,activation="softmax"))

### sparse: the labels are integer indices; with one-hot labels the sparse_ prefix is not needed
model.compile(loss = "sparse_categorical_crossentropy",optimizer = "sgd",
              metrics = ["accuracy"])
#model.layers
#model.summary()

history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid,y_valid))
```
3. The output:
```
runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2')
2.0.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
sklearn 0.21.3
tensorflow 2.0.0
tensorflow_core.keras 2.2.4-tf
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause:
WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause:
55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 2/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 3/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 4/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 5/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 6/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 7/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 8/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 9/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 10/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
```
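A hedged guess at the NaN: the images are fed in as raw 0-255 values, which with plain SGD and a softmax cross-entropy easily overflows. Scaling the inputs to [0, 1] (or standardizing them) before fit() is the usual first thing to try (a sketch only, reusing the variable names above):

```
x_train = x_train / 255.0
x_valid = x_valid / 255.0
x_test = x_test / 255.0

history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid, y_valid))
```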
Changing the keras yolov3 tiny_yolo_body network structure to a VGG16 structure
Change the tiny_yolo_body network in the Keras YOLOv3 framework into a VGG16 network structure; it is enough that the program runs and the loss decreases normally.
With Keras, reading data via ImageDataGenerator.flow gives an extremely low training ACC
I'm building a character-recognition neural network. The dataset is images whose file names are numbered, and the label is taken from the file name. I want to use the ImageDataGenerator and flow functions to improve generalization and then feed the generated data into the network, but this way acc = 1/number_of_classes, which is essentially zero. Where is the problem?
```
datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1
)

def read_train_image(self, name):
    myimg = Image.open(name).convert('RGB')
    return np.array(myimg)

def train(self):
    # training set
    train_img_list = []
    train_label_list = []
    # test set
    test_img_list = []
    test_label_list = []
    for file in os.listdir('train'):
        files_img_in_array = self.read_train_image(name='train/' + file)
        train_img_list.append(files_img_in_array)   # Image list add up
        train_label_list.append(int(file.split('_')[0]))   # lable list addup
    for file in os.listdir('test'):
        files_img_in_array = self.read_train_image(name='test/' + file)
        test_img_list.append(files_img_in_array)   # Image list add up
        test_label_list.append(int(file.split('_')[0]))   # lable list addup
    train_img_list = np.array(train_img_list)
    train_label_list = np.array(train_label_list)
    test_img_list = np.array(train_img_list)
    test_label_list = np.array(train_label_list)
    train_label_list = np_utils.to_categorical(train_label_list, 5788)
    test_label_list = np_utils.to_categorical(test_label_list, 5788)
    train_img_list = train_img_list.astype('float32')
    test_img_list = test_img_list.astype('float32')
    test_img_list /= 255.0
    train_img_list /= 255.0
```
This is how the image data are processed — images and labels are stored in lists. Below is the training with fit_generator:
```
model.fit_generator(
    self.datagen.flow(x=train_img_list, y=train_label_list, batch_size=2),
    samples_per_epoch=len(train_img_list),
    epochs=10,
    validation_data=(test_img_list,test_label_list),
)
```
TensorFlow error: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3] — but the fed data look correct to me, please advise
While debugging the GoogleNet code I keep getting InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3]. As far as I can tell, the data being fed are fine — please advise.
```
# -*- coding: utf-8 -*-
"""
GoogleNet is also known as InceptionNet
Created on Mon Feb 10 12:15:35 2020
@author: 月光下的云海
"""
import tensorflow as tf
from keras.datasets import cifar10
import numpy as np
import tensorflow.contrib.slim as slim

tf.reset_default_graph()
tf.reset_default_graph()
(x_train,y_train),(x_test,y_test) = cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
y_train = y_train.astype('int32')
y_test = y_test.astype('int32')
y_train = y_train.reshape(y_train.shape[0])
y_test = y_test.reshape(y_test.shape[0])
x_train = x_train/255
x_test = x_test/255

#************************************************ Build the inception block ************************************************
# A multi-branch network structure
# INPUTS:
#   d0_1: branch 0 (leftmost), number of 1x1 kernels
#   d1_1: branch 1, number of 1x1 kernels
#   d1_3: branch 1, number of 3x3 kernels
#   d2_1: branch 2, number of 1x1 kernels
#   d2_5: branch 2, number of 5x5 kernels
#   d3_1: branch 3, number of 1x1 kernels
#   scope: variable scope name
#   reuse: whether to reuse variables
#***************************************************************************************************************
def inception(x,d0_1,d1_1,d1_3,d2_1,d2_5,d3_1,scope = 'inception',reuse = None):
    with tf.variable_scope(scope,reuse = reuse):
        # the default arguments of slim.conv2d and slim.max_pool2d live in the slim arg scope
        with slim.arg_scope([slim.conv2d,slim.max_pool2d],stride = 1,padding = 'SAME'):
            # first branch
            with tf.variable_scope('branch0'):
                branch_0 = slim.conv2d(x,d0_1,[1,1],scope = 'conv_1x1')
            # second branch
            with tf.variable_scope('branch1'):
                branch_1 = slim.conv2d(x,d1_1,[1,1],scope = 'conv_1x1')
                branch_1 = slim.conv2d(branch_1,d1_3,[3,3],scope = 'conv_3x3')
            # third branch
            with tf.variable_scope('branch2'):
                branch_2 = slim.conv2d(x,d2_1,[1,1],scope = 'conv_1x1')
                branch_2 = slim.conv2d(branch_2,d2_5,[5,5],scope = 'conv_5x5')
            # fourth branch
            with tf.variable_scope('branch3'):
                branch_3 = slim.max_pool2d(x,[3,3],scope = 'max_pool')
                branch_3 = slim.conv2d(branch_3,d3_1,[1,1],scope = 'conv_1x1')
            # concatenate
            net = tf.concat([branch_0,branch_1,branch_2,branch_3],axis = -1)
            return net

#*************************************** Build GoogleNet from inception blocks *********************************************
# INPUTS:
#   inputs-----------input tensor
#   num_classes------number of output classes
#   is_trainning-----whether batch_norm runs in training mode; batch_norm and is_trainning are tightly coupled
#                    when is_trainning = True it uses the moving mean/variance of a batch
#                    when is_trainning = False it uses fixed values
#   verbose----------controls printing
#   reuse------------whether to reuse variables
#***************************************************************************************************************
def googlenet(inputs,num_classes,reuse = None,is_trainning = None,verbose = False):
    with slim.arg_scope([slim.batch_norm],is_training = is_trainning):
        with slim.arg_scope([slim.conv2d,slim.max_pool2d,slim.avg_pool2d],
                            padding = 'SAME',stride = 1):
            net = inputs
            # first block of GoogleNet
            with tf.variable_scope('block1',reuse = reuse):
                net = slim.conv2d(net,64,[5,5],stride = 2,scope = 'conv_5x5')
                if verbose:
                    print('block1 output:{}'.format(net.shape))
            # second block
            with tf.variable_scope('block2',reuse = reuse):
                net = slim.conv2d(net,64,[1,1],scope = 'conv_1x1')
                net = slim.conv2d(net,192,[3,3],scope = 'conv_3x3')
                net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool')
                if verbose:
                    print('block2 output:{}'.format(net.shape))
            # third block
            with tf.variable_scope('block3',reuse = reuse):
                net = inception(net,64,96,128,16,32,32,scope = 'inception_1')
                net = inception(net,128,128,192,32,96,64,scope = 'inception_2')
                net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool')
                if verbose:
                    print('block3 output:{}'.format(net.shape))
            # fourth block
            with tf.variable_scope('block4',reuse = reuse):
                net = inception(net,192,96,208,16,48,64,scope = 'inception_1')
                net = inception(net,160,112,224,24,64,64,scope = 'inception_2')
                net = inception(net,128,128,256,24,64,64,scope = 'inception_3')
                net = inception(net,112,144,288,24,64,64,scope = 'inception_4')
                net = inception(net,256,160,320,32,128,128,scope = 'inception_5')
                net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool')
                if verbose:
                    print('block4 output:{}'.format(net.shape))
            # fifth block
            with tf.variable_scope('block5',reuse = reuse):
                net = inception(net,256,160,320,32,128,128,scope = 'inception1')
                net = inception(net,384,182,384,48,128,128,scope = 'inception2')
                net = slim.avg_pool2d(net,[2,2],stride = 2,scope = 'avg_pool')
                if verbose:
                    print('block5 output:{}'.format(net.shape))
            # final block
            with tf.variable_scope('classification',reuse = reuse):
                net = slim.flatten(net)
                net = slim.fully_connected(net,num_classes,activation_fn = None,normalizer_fn = None,scope = 'logit')
                if verbose:
                    print('classification output:{}'.format(net.shape))
            return net

# set a default activation function and batch_norm for the conv layers
with slim.arg_scope([slim.conv2d],activation_fn = tf.nn.relu,normalizer_fn = slim.batch_norm) as sc:
    conv_scope = sc

is_trainning_ph = tf.placeholder(tf.bool,name = 'is_trainning')

# define the placeholders
x_train_ph = tf.placeholder(shape = (None,x_train.shape[1],x_train.shape[2],x_train.shape[3]),dtype = tf.float32)
x_test_ph = tf.placeholder(shape = (None,x_test.shape[1],x_test.shape[2],x_test.shape[3]),dtype = tf.float32)
y_train_ph = tf.placeholder(shape = (None,),dtype = tf.int32)
y_test_ph = tf.placeholder(shape = (None,),dtype = tf.int32)

# instantiate the network
with slim.arg_scope(conv_scope):
    train_out = googlenet(x_train_ph,10,is_trainning = is_trainning_ph,verbose = True)
    val_out = googlenet(x_test_ph,10,is_trainning = is_trainning_ph,reuse = True)

# define loss and acc
with tf.variable_scope('loss'):
    train_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_train_ph,logits = train_out,scope = 'train')
    val_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_test_ph,logits = val_out,scope = 'val')
with tf.name_scope('accurcay'):
    train_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(train_out,axis = -1,output_type = tf.int32),y_train_ph),tf.float32))
    val_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(val_out,axis = -1,output_type = tf.int32),y_test_ph),tf.float32))

# define the training op
lr = 1e-2
opt = tf.train.MomentumOptimizer(lr,momentum = 0.9)
# collect all ops that need to be updated via tf.get_collection
update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# use TensorFlow control flow: run update_op first, then minimize the loss
with tf.control_dependencies(update_op):
    train_op = opt.minimize(train_loss)

# open the session
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
batch_size = 64

# start training
for e in range(10000):
    batch1 = np.random.randint(0,50000,size = batch_size)
    t_x_train = x_train[batch1][:][:][:]
    t_y_train = y_train[batch1]
    batch2 = np.random.randint(0,10000,size = batch_size)
    t_x_test = x_test[batch2][:][:][:]
    t_y_test = y_test[batch2]
    sess.run(train_op,feed_dict = {x_train_ph:t_x_train,
                                   is_trainning_ph:True,
                                   y_train_ph:t_y_train})
#    if(e%1000 == 999):
#        loss_train,acc_train = sess.run([train_loss,train_acc],
#                                        feed_dict = {x_train_ph:t_x_train,
#                                                     is_trainning_ph:True,
#                                                     y_train_ph:t_y_train})
#        loss_test,acc_test = sess.run([val_loss,val_acc],
#                                      feed_dict = {x_test_ph:t_x_test,
#                                                   is_trainning_ph:False,
#                                                   y_test_ph:t_y_test})
#        print('STEP{}:train_loss:{:.6f} train_acc:{:.6f} test_loss:{:.6f} test_acc:{:.6f}'
#              .format(e+1,loss_train,acc_train,loss_test,acc_test))

saver.save(sess = sess,save_path = 'VGGModel\model.ckpt')
print('Train Done!!')
print('--'*60)
sess.close()
```
The error message is:
```
Using TensorFlow backend.
block1 output:(?, 16, 16, 64)
block2 output:(?, 8, 8, 192)
block3 output:(?, 4, 4, 480)
block4 output:(?, 2, 2, 832)
block5 output:(?, 1, 1, 1024)
classification output:(?, 10)
Traceback (most recent call last):
  File "<ipython-input-1-6385a760fe16>", line 1, in <module>
    runfile('F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py', wdir='F:/Project/TEMP/LearnTF/GoogleNet')
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py", line 177, in <module>
    y_train_ph:t_y_train})
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
    run_metadata_ptr)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run
    run_metadata)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3]
    [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,32,32,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
    [[Node: gradients/block4/inception_4/concat_grad/ShapeN/_45 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23694_gradients/block4/inception_4/concat_grad/ShapeN", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
```
I have looked at this many times and it does not seem to be a data-feeding problem. Baidu results suggest a summary issue, but I'm not using any summaries. I'm stumped.
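One hedged reading of the error: slim.batch_norm registers its moving-average update ops for both googlenet() calls in tf.GraphKeys.UPDATE_OPS, and train_op was placed under a control dependency on all of them, so running train_op also pulls in the validation branch and therefore needs x_test_ph ('Placeholder_1'). Two common workarounds, shown as a sketch rather than a verified fix:

```
# (a) also feed the validation placeholders during the training step
sess.run(train_op, feed_dict={x_train_ph: t_x_train, y_train_ph: t_y_train,
                              x_test_ph: t_x_test, y_test_ph: t_y_test,
                              is_trainning_ph: True})

# (b) or collect UPDATE_OPS right after train_out is built (before val_out exists),
#     so train_op only depends on the training branch's batch-norm updates:
# update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
```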
TensorFlow downloads the Fashion MNIST dataset on the first run; the download seems to have been interrupted by a bad connection and now it errors — what should I do?
The code is as follows:
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
from tensorflow import keras

print (tf.__version__)
print (sys.version_info)
for module in mpl ,np, pd, sklearn, tf, keras:
    print (module.__name__,module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
x_valid,x_train = x_train_all[:5000],x_train_all[5000:]
y_valid,y_train = y_train_all[:5000],y_train_all[5000:]
print (x_valid.shape, y_valid.shape)
print (x_train.shape, y_train.shape)
print (x_test.shape, y_test.shape)
```
The error is as follows:
```
2.1.0
sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0)
matplotlib 2.2.5
numpy 1.16.6
pandas 0.24.2
sklearn 0.20.4
tensorflow 2.1.0
tensorflow_core.python.keras.api._v2.keras 2.2.4-tf
Traceback (most recent call last):
  File "/home/join/test_demo/test2.py", line 26, in <module>
    (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py", line 59, in load_data
    imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)
  File "/usr/lib/python2.7/gzip.py", line 261, in read
    self._read(readsize)
  File "/usr/lib/python2.7/gzip.py", line 315, in _read
    self._read_eof()
  File "/usr/lib/python2.7/gzip.py", line 354, in _read_eof
    hex(self.crc)))
IOError: CRC check failed 0xa445bb78 != 0xe7f80d3fL
```
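A hedged suggestion: "CRC check failed" on a gzip read usually means the cached download was cut off mid-transfer, so deleting the partial files and calling load_data() again forces a fresh download. The cache path below is the Keras default and may differ on your machine:

```
import os, shutil
from tensorflow import keras

cache_dir = os.path.expanduser("~/.keras/datasets/fashion-mnist")
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)   # drop the corrupted .gz files

(x_train_all, y_train_all), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
```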
Keras model.evaluate() error: 'numpy.float64' object is not iterable
```
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.25)
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)
train_data = (x_train - mean) / std
test_data = (x_test - mean) / std

model = Sequential([Dense(64, input_shape=(6,)), Activation('relu'), Dense(32), Activation('relu'), Dense(1)])
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
k = model.fit
[loss, sgd] = model.evaluate(test_data, y_test, verbose=1)
```
I can't tell where the last step goes wrong — test_data and y_test are both DataFrames.
TypeError                                 Traceback (most recent call last)
<ipython-input-29-3b0767a3c446> in <module>
----> 1 [loss, mse] = model.evaluate(test_data, y_test, verbose=1)
TypeError: 'numpy.float64' object is not iterable
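A hedged reading of the traceback: the model was compiled with a loss but no metrics, so model.evaluate() returns a single float, and unpacking that scalar into two names fails. Either keep the scalar, or compile with a metric so a list comes back; note also that fit must actually be called. A sketch, not the author's original code (the epochs value is illustrative):

```
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mse'])
model.fit(train_data, y_train, epochs=50, verbose=0)

loss, mse = model.evaluate(test_data, y_test, verbose=1)
print(loss, mse)
```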
When plotting the model-accuracy results with Keras, the following error appears:
After building the deep-learning model, I trained it with backpropagation. The training setup is defined as:
```
model.compile(loss='categorical_crossentropy',
              optimizer='adam',metrics=['accuracy'])
```
Run the training:
```
train_history =model.fit(x=x_Train_normalize,
                         y=y_Train_OneHot,validation_split=0.2,
                         epochs=10,batch_size=200,verbose=2)
```
The run produces:
![图片说明](https://img-ask.csdn.net/upload/201910/17/1571243584_952792.png)
Define show_train_history to display the training progress:
```
import matplotlib.pyplot as plt
def show_train_history(train_history,train,validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train','validation'],loc='upper left')
    plt.show()
```
Plot the accuracy results:
```
show_train_history(train_history,'acc','val_acc')
```
That produces the following problem:
![图片说明](https://img-ask.csdn.net/upload/201910/17/1571243832_179270.png)
What is going on? I'd really appreciate some help.
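A hedged guess at the error in the screenshot: newer Keras/TensorFlow versions record the metric as 'accuracy' / 'val_accuracy' instead of 'acc' / 'val_acc', so indexing the history with 'acc' raises a KeyError. Printing the available keys settles it:

```
print(train_history.history.keys())
# then plot with whichever names actually appear, e.g.
# show_train_history(train_history, 'accuracy', 'val_accuracy')
```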
Training on the IMDB data with Keras: why is the prediction positive sentiment?
I'm learning to do binary classification of reviews with the IMDB dataset in Keras, and I have a question: why is the prediction positive sentiment? The code is as follows:
```
from keras.datasets import imdb
from keras import models
from keras import layers
import numpy as np
import matplotlib.pyplot as plt

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
        print('i=',i,'results[i]=',results[i])
    return results

(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

'''word_index = imdb.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
decoded_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
'''

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16,activation='relu'))
model.add(layers.Dense(1,activation='sigmoid'))

x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,validation_data=(x_val, y_val))

history_dict = history.history
loss_value = history_dict['loss']
val_loss_value = history_dict['val_loss']
epochs = range(1,len(loss_value)+1)
plt.plot(epochs, loss_value, 'bo', label='Trianing Loss')
plt.plot(epochs, val_loss_value, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
Building a single-character recognition network for SimHei Chinese characters with Keras: val_acc = 0.0002
This is how I read in the training data. The dataset consists of single-character images (shape 64*64) generated from a txt file, covering 788 character classes (including digits and the character X). Each image's file name starts with the character's position in the txt dictionary, which is used as the label.
```
def read_train_image(self, name):
    img = Image.open(name).convert('RGB')
    return np.array(img)

def train(self):
    train_img_list = []
    train_label_list = []
    for file in os.listdir('train'):
        files_img_in_array = self.read_train_image(name='train/'+ file)
        train_img_list.append(files_img_in_array)   # Image list add up
        train_label_list.append(int(file.split('_')[0]))   # lable list addup
    train_img_list = np.array(train_img_list)
    train_label_list = np.array(train_label_list)
    train_label_list = np_utils.to_categorical(train_label_list, self.count)
    train_img_list = train_img_list.astype('float32')
    train_img_list /= 255
```
After training, train_acc reaches 0.99, but the validation accuracy stays equal to 0. The network structure is below:
```
model = Sequential()
# first convolutional layer
model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(64,64,3),kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization(axis=3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# second convolutional layer
model.add(Convolution2D(64, 3, 3, border_mode='valid',kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# third convolutional layer
model.add(Convolution2D(128, 3, 3, border_mode='valid',kernel_regularizer=l2(0.0001)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# fully connected layers
model.add(Flatten())
model.add(Dense(128, init= 'he_normal'))
model.add(BatchNormalization())
model.add(Activation('relu'))
# output layer: softmax over the character classes
model.add(Dense(output_dim=self.count, init= 'he_normal'))
model.add(Activation('softmax'))
# loss function and optimizer
model.compile(loss='categorical_crossentropy',
              optimizer='adam',metrics=['accuracy'])
# train, setting the batch size and the number of epochs
model.fit(
    train_img_list,
    train_label_list,
    epochs=500,
    batch_size=128,
    validation_split=0.2,
    verbose=1,
    shuffle= False,
)
```
That is roughly the structure. After a dozen or so epochs the training acc reaches 0.99 while the validation val_acc is 0. Online posts say this is probably overfitting — I'd appreciate pointers from someone experienced.
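A hedged observation on the near-zero val_acc: with validation_split=0.2 Keras takes the last 20% of the arrays as validation before any shuffling, and because the samples were appended in os.listdir() order, that slice can consist largely of characters the model never saw during training. Shuffling the arrays once before fit() is a cheap way to rule this out (sketch only, reusing the question's variable names):

```
import numpy as np

idx = np.random.permutation(len(train_img_list))
train_img_list = train_img_list[idx]
train_label_list = train_label_list[idx]

model.fit(train_img_list, train_label_list,
          epochs=500, batch_size=128,
          validation_split=0.2, verbose=1)
```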
Keras classification: the validation loss and accuracy show 0.0000e+00 during training, yet the validation curves still appear when plotting at the end
```
data_train, data_test, label_train, label_test = train_test_split(data_all, label_all, test_size= 0.2, random_state = 1)
data_train, data_val, label_train, label_val = train_test_split(data_train,label_train, test_size = 0.25)
data_train = np.asarray(data_train, np.float32)
data_test = np.asarray(data_test, np.float32)
data_val = np.asarray(data_val, np.float32)
label_train = np.asarray(label_train, np.int32)
label_test = np.asarray(label_test, np.int32)
label_val = np.asarray(label_val, np.int32)

training = model.fit_generator(datagen.flow(data_train, label_train_binary, batch_size=200,shuffle=True),
                               validation_data=(data_val,label_val_binary),
                               samples_per_epoch=len(data_train)*8, nb_epoch=30, verbose=1)

def plot_history(history):
    plt.plot(training.history['acc'])
    plt.plot(training.history['val_acc'])
    plt.title('model accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend(['acc', 'val_acc'], loc='lower right')
    plt.show()
    plt.plot(training.history['loss'])
    plt.plot(training.history['val_loss'])
    plt.title('model loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend(['loss', 'val_loss'], loc='lower right')
    plt.show()

plot_history(training)
```
![图片说明](https://img-ask.csdn.net/upload/201812/10/1544423669_112599.jpg)
![图片说明](https://img-ask.csdn.net/upload/201812/10/1544423681_598605.jpg)
ValueError when training a network with Keras
As the title says: when training with Keras's model.fit function I get the error ValueError: None values not supported. The error message is as follows:
```
  File "C:/Users/Desktop/MNISTpractice/mnist.py", line 93, in <module>
    model.fit(x_train,y_train, epochs=2, callbacks=callback_list,validation_data=(x_val,y_val))
  File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1575, in fit
    self._make_train_function()
  File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 960, in _make_train_function
    loss=self.total_loss)
  File "C:\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\Anaconda3\lib\site-packages\keras\optimizers.py", line 432, in get_updates
    m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 820, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 639, in convert_to_tensor
    as_ref=False)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 704, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 113, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 102, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```
Binary image classification with Keras: why do all test samples always end up in one class?
I'm doing binary image classification with Keras. The labels are imbalanced, roughly 1:10. The code is as follows:
data = np.load('D:/a.npz')
image_data, label_data= data['image'], data['label']
Because the data are imbalanced, I split them with stratified K-fold into 3 groups:
train_x=image_data[train]
test_x=image_data[test]
train_y=label_data[train]
test_y=label_data[test]
train_x = np.array(train_x)
test_x = np.array(test_x)
train_x = train_x.reshape(train_x.shape[0],1,28,28)
test_x = test_x.reshape(test_x.shape[0],1,28,28)
train_x = train_x.astype('float32')
test_x = test_x.astype('float32')
train_x /=255
test_x /=255
train_y = np.array(train_y)
test_y = np.array(test_y)
Then I use a Keras Sequential model:
model.compile(optimizer='rmsprop',loss="binary_crossentropy",metrics=['acc'])
model.fit(train_x, train_y,batch_size=128, class_weight = 'auto', epochs=10,verbose=1,validation_data=(test_x, test_y))
from sklearn.metrics import confusion_matrix
y_pred_model = model.predict_proba(test_x)
C=confusion_matrix(test_y,y_pred_model)
print(C)
The result is that all test samples end up in one class. I suspected that, with the imbalance, the model decides the optimum is to call everything the majority class, but after changing the labels to a 1:1 split the result is still that every test sample goes to one class:
[[22 0]
 [21 0]]
What is the cause? Where is the code wrong?
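Two hedged things worth checking in the snippet above: predict_proba() returns probabilities, so they should be thresholded before building a confusion matrix, and recent Keras versions expect class_weight as a dict rather than the string 'auto'. A sketch reusing the question's variable names (the 0.5 threshold and the weights are illustrative):

```
import numpy as np
from sklearn.metrics import confusion_matrix

y_prob = model.predict(test_x)
y_pred = (y_prob > 0.5).astype('int32').ravel()
print(confusion_matrix(test_y, y_pred))

# explicit weights for a roughly 1:10 imbalance
model.fit(train_x, train_y, batch_size=128, epochs=10, verbose=1,
          class_weight={0: 1.0, 1: 10.0},
          validation_data=(test_x, test_y))
```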