model.evaluate() in Keras raises: 'numpy.float64' object is not iterable
```
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.25)
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)
train_data = (x_train - mean) / std
test_data = (x_test - mean) / std

model = Sequential([Dense(64, input_shape=(6,)), Activation('relu'),
                    Dense(32), Activation('relu'),
                    Dense(1)])

sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
k = model.fit(train_data, y_train)
[loss, mse] = model.evaluate(test_data, y_test, verbose=1)
```

I can't tell what's wrong with that last line... test_data and y_test are both DataFrames.

```
TypeError                                 Traceback (most recent call last)
in
----> 1 [loss, mse] = model.evaluate(test_data, y_test, verbose=1)

TypeError: 'numpy.float64' object is not iterable
```
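Most likely the problem is not the DataFrames at all: the model was compiled without any `metrics`, and in that case `model.evaluate` returns a single scalar loss rather than a list, so unpacking it into two names fails. A minimal sketch of both behaviors, using random stand-in data:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.random.rand(100, 6)
y = np.random.rand(100, 1)

model = Sequential([Dense(64, activation='relu', input_shape=(6,)), Dense(1)])

# no metrics -> evaluate() returns one bare scalar loss
model.compile(loss='mean_squared_error', optimizer='sgd')
model.fit(x, y, epochs=1, verbose=0)
result = model.evaluate(x, y, verbose=0)
print(type(result))  # a single float, so [loss, mse] = result raises TypeError

# with a metric -> evaluate() returns [loss, metric] and can be unpacked
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mse'])
loss, mse = model.evaluate(x, y, verbose=0)
```

So either compile with `metrics=['mse']` and keep the two-element unpacking, or keep the compile as-is and assign the single return value to one name.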

Other related questions
Keras reports a memory error when running a CNN
As the title says: I taught myself TensorFlow earlier and only fell into the Keras rabbit hole yesterday. With no server available, I installed a Theano-backed Keras on my bargain-bin Lenovo laptop. A fully connected network ran fine at first, but then I built a very small CNN (code below). It can print the network structure with model.summary(), yet once it actually runs, an error dialog pops up. Code:
```
import keras
import numpy as np
from keras.models import load_model

input1=keras.layers.Input(shape=(25,))
x=keras.layers.Reshape([5,5,1])(input1)
x1=keras.layers.Conv2D(filters=2,kernel_size=(2,2),strides=(1,1),padding='valid',activation='elu')(x)
x2=keras.layers.MaxPooling2D(pool_size=(2,2),strides=(1,1),padding='valid')(x1)
x3=keras.layers.Conv2D(filters=4,kernel_size=(2,2),strides=(1,1),padding='valid',activation='elu')(x2)
x4=keras.layers.AveragePooling2D(pool_size=(2,2),strides=(1,1),padding='valid')(x3)
x5=keras.layers.Reshape([4*4*2,])(x1)
xx=keras.layers.Dense(1,activation='elu')(x5)
model=keras.models.Model(inputs=input1,outputs=xx)
model.summary()
model.compile(loss='mse', optimizer='sgd')

def data():
    data=np.random.randint(0,2,[1,25])
    return(data)

def num(data):
    data=np.reshape(data,[25])
    sum_=0
    for i in data:
        sum_=sum_+i
    if sum_>10:
        result=[[1]]
    else:
        result=[[0]]
    return(result)

while True:
    for i in range(100):
        x=data()
        y=num(x)
        cost = model.train_on_batch([x], [y])
        print(i)
    x=data()
    y=num(x)
    cost = model.evaluate(x, y)
    print('loss=',cost)
    x=data()
    y=num(x)
    print('x=',x)
    print('y=',y)
    Y_pred = model.predict(x)
    print(Y_pred)
    words=input('continue??\::')
    if words=='n':
        break
```
It can print the model structure: ![screenshot](https://img-ask.csdn.net/upload/202001/07/1578376564_807468.png) but as soon as execution continues, an error dialog pops up: ![screenshot](https://img-ask.csdn.net/upload/202001/07/1578376772_416127.png) What do the experts here think? My machine runs 32-bit Windows XP with less than 1 GB of RAM (an antique I keep around for fun), with Python 2.7.15, numpy (1.16.6), scipy (1.2.2), theano (1.0.4), and keras (2.3.1). Please go easy on me: I normally write TensorFlow on a server; this laptop is purely for tinkering. Any advice appreciated.
Keras model.fit reports an error: the input has the wrong number of dimensions. How do I fix this?
Calling
```
model.fit(x=images, y=labels, validation_split=0.1, batch_size=batch_size, epochs=n_epochs, callbacks=callbacks, shuffle=True)
```
fails because the images in my training set are grayscale, so images has shape (2, 28, 28), which triggers `Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (2, 28, 28)`. How should I handle this?
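Conv2D stacks expect an explicit channel axis, so the usual fix is to reshape the grayscale batch from `(num_samples, 28, 28)` to `(num_samples, 28, 28, 1)` before fit. A minimal sketch with a stand-in array:

```python
import numpy as np

images = np.zeros((2, 28, 28))            # stand-in for the grayscale batch
images = np.expand_dims(images, axis=-1)  # add the channel axis
print(images.shape)                       # (2, 28, 28, 1)
```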
Keras error: All inputs to the layer should be tensors.
Deep learning newbie here, building a network with Keras for the first time and running into a problem:
```
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers import Dense, Activation
from keras.layers import Concatenate
from keras.layers import Add

# built some embedding layers
model_store = Embedding(1115, 10)
model_dow = Embedding(7, 6)
model_day = Embedding(31, 10)
model_month = Embedding(12, 6)
model_year = Embedding(3, 2)
model_promotion = Embedding(2, 1)
model_state = Embedding(12, 6)

# concatenate the embedding layers
output_embeddings = [model_store, model_dow, model_day, model_month, model_year, model_promotion, model_state]
output_model = Concatenate()(output_embeddings)
```
Running this fails with:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs)
    278             try:
--> 279                 K.is_keras_tensor(x)
    280             except ValueError:

D:\python\lib\site-packages\keras\backend\tensorflow_backend.py in is_keras_tensor(x)
    473         raise ValueError('Unexpectedly found an instance of type `' +
--> 474                          str(type(x)) + '`. '
    475                          'Expected a symbolic tensor instance.')

ValueError: Unexpectedly found an instance of type `<class 'keras.layers.embeddings.Embedding'>`. Expected a symbolic tensor instance.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-32-8e957c4150f0> in <module>
----> 1 output_model = Concatenate()(output_embeddings)

D:\python\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    412             # Raise exceptions in case the input is not compatible
    413             # with the input_spec specified in the layer constructor.
--> 414             self.assert_input_compatibility(inputs)
    415
    416             # Collect input shapes to build layer.

D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs)
    283                                      'Received type: ' +
    284                                      str(type(x)) + '. Full input: ' +
--> 285                                      str(inputs) + '. All inputs to the layer '
    286                                      'should be tensors.')
    287

ValueError: Layer concatenate_5 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x000001C82EA1EC88>, <keras.layers.embeddings.Embedding object at 0x000001C82EA1EB38>, <keras.layers.embeddings.Embedding object at 0x000001C82EA1EB00>, <keras.layers.embeddings.Embedding object at 0x000001C82E954240>, <keras.layers.embeddings.Embedding object at 0x000001C82E954198>, <keras.layers.embeddings.Embedding object at 0x000001C82E9542E8>, <keras.layers.embeddings.Embedding object at 0x000001C82E954160>]. All inputs to the layer should be tensors.
```
The error says all inputs to the layer should be tensors. How should I modify this? Thanks a lot!
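The Concatenate layer is being given the Embedding layer objects themselves rather than the tensors they produce; a layer only yields a tensor once it is called on an input tensor. A minimal sketch of the usual functional-API pattern, shown for two of the embeddings (the Input shapes are assumptions):

```python
from keras.layers import Input, Embedding, Concatenate, Flatten
from keras.models import Model

input_store = Input(shape=(1,))
input_dow = Input(shape=(1,))

emb_store = Embedding(1115, 10)(input_store)  # calling the layer yields a tensor
emb_dow = Embedding(7, 6)(input_dow)

merged = Concatenate()([Flatten()(emb_store), Flatten()(emb_dow)])
model = Model(inputs=[input_store, input_dow], outputs=merged)
model.summary()
```

Flatten is used here so the two `(batch, 1, dim)` embedding outputs concatenate into one flat `(batch, 16)` vector.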
How do I implement a multi-output model with keras.utils.Sequence + fit_generator?
The inputs and outputs take this form:
```
model = Model(inputs=input_img, outputs=[mask, net2_opt, net3_opt])
```
Since a Sequence is required to return a two-element tuple, the generator's __getitem__ is implemented as:
```
class DataGenerator(keras.utils.Sequence):
    def __getitem__(self, index):
        # build each batch; read the data however suits your dataset
        # pick batch_size indices
        batch_indexs = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # fetch the corresponding entries from the datas collection
        batch_datas = [self.datas[k] for k in batch_indexs]
        # generate the batch
        images, masks, heatmaps, xyzs = self.data_generation(batch_datas)
        return (images, [masks, heatmaps, xyzs])
```
But mask in the outputs does not line up with what __getitem__ returns, and it fails with: ValueError: Error when checking target: expected conv_1x1_x14 to have 4 dimensions, but got array with shape (3,1). Is it that keras.utils.Sequence cannot support multiple outputs?
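For what it's worth, `keras.utils.Sequence` does support multi-output models; returning `(x, [y1, y2, y3])` is the accepted format, so the ValueError points at a shape mismatch between `masks` and the `conv_1x1_x14` output rather than at the Sequence machinery. A self-contained sketch (the toy model and shapes are illustrative assumptions) that trains a two-output model from a Sequence:

```python
import numpy as np
import keras
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
from keras.models import Model

inp = Input(shape=(16, 16, 3))
mask_out = Conv2D(1, (3, 3), padding='same', name='mask')(inp)   # per-pixel output
aux_out = Dense(4, name='aux')(GlobalAveragePooling2D()(inp))    # vector output
model = Model(inputs=inp, outputs=[mask_out, aux_out])
model.compile(optimizer='adam', loss=['mse', 'mse'])

class ToySequence(keras.utils.Sequence):
    def __len__(self):
        return 5  # batches per epoch
    def __getitem__(self, index):
        x = np.random.rand(2, 16, 16, 3)
        # each target must match the full shape of its output, batch dim included
        return x, [np.random.rand(2, 16, 16, 1), np.random.rand(2, 4)]

model.fit_generator(ToySequence(), epochs=1)
```

If this pattern runs but the real generator fails, the first thing to compare is `masks.shape` against the expected `(batch, H, W, C)` of `conv_1x1_x14`; shape `(3, 1)` suggests the masks are being stacked into a column vector rather than a 4-D array.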
Help with the error TypeError: slice indices must be integers or None or have an __index__ method
Environment: PyCharm 2019.2.3, Python 3.7, TensorFlow 2.0. Code below:
```
import tensorflow as tf
import numpy as np


class DataLoader():
    def __init__(self):
        path = tf.keras.utils.get_file('nietzsche.txt',
                                       origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
        with open(path, encoding='utf-8') as f:
            self.raw_text = f.read().lower()
        self.chars = sorted(list(set(self.raw_text)))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
        self.text = [self.char_indices[c] for c in self.raw_text]

    def get_batch(self, seq_length, batch_size):
        seq = []
        next_char = []
        for i in range(batch_size):
            index = np.random.randint(0, len(self.text) - seq_length)
            seq.append(self.text[index:index + seq_length])
            next_char.append(self.text[index + seq_length])
        return np.array(seq), np.array(next_char)  # [batch_size, seq_length], [num_batch]


class RNN(tf.keras.Model):
    def __init__(self, num_chars, batch_size, seq_length):
        super().__init__()
        self.num_chars = num_chars
        self.seq_length = seq_length
        self.batch_size = batch_size
        self.cell = tf.keras.layers.LSTMCell(units=256)
        self.dense = tf.keras.layers.Dense(units=self.num_chars)

    def call(self, inputs, from_logits=False):
        inputs = tf.one_hot(inputs, depth=self.num_chars)  # [batch_size, seq_length, num_chars]
        state = self.cell.get_initial_state(batch_size=self.batch_size, dtype=tf.float32)
        for t in range(self.seq_length):
            output, state = self.cell(inputs[:, t, :], state)
        logits = self.dense(output)
        if from_logits:
            return logits
        else:
            return tf.nn.softmax(logits)


num_batches = 10
seq_length = 40
batch_size = 50
learning_rate = 1e-3

data_loader = DataLoader()
model = RNN(num_chars=len(data_loader.chars), batch_size=batch_size, seq_length=seq_length)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
for batch_index in range(num_batches):
    X, y = data_loader.get_batch(seq_length, batch_size)
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
        print("batch %d: loss %f" % (batch_index, loss.numpy()))
    grads = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))


def predict(self, inputs, temperature=1.):
    batch_size, _ = tf.shape(inputs)
    logits = self(inputs, from_logits=True)
    prob = tf.nn.softmax(logits / temperature).numpy()
    return np.array([np.random.choice(self.num_chars, p=prob[i, :])
                     for i in range(batch_size.numpy())])


X_, _ = data_loader.get_batch(seq_length, 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
    X = X_
    print("diversity %f:" % diversity)
    for t in range(400):
        y_pred = model.predict(X, diversity)
        print(data_loader.indices_char[y_pred[0]], end='', flush=True)
        X = np.concatenate([X[:, 1:], np.expand_dims(y_pred, axis=1)], axis=-1)
    print("\n")
```
The error:
```
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] on win32
runfile('F:/pyth/pj3/study3.py', wdir='F:/pyth/pj3')
batch 0: loss 4.041161
batch 1: loss 4.026710
batch 2: loss 4.005230
batch 3: loss 3.983728
batch 4: loss 3.920999
batch 5: loss 3.864793
batch 6: loss 3.644211
batch 7: loss 3.375458
batch 8: loss 3.620051
batch 9: loss 3.382381
diversity 0.200000:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "D:\Program Files\JetBrains\PyCharm 2019.2.3\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "D:\Program Files\JetBrains\PyCharm 2019.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "F:/pyth/pj3/study3.py", line 97, in <module>
    y_pred = model.predict(X, diversity)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 909, in predict
    use_multiprocessing=use_multiprocessing)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 722, in predict
    callbacks=callbacks)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 362, in model_iteration
    batch_ids = index_array[batch_start:batch_end]
TypeError: slice indices must be integers or None or have an __index__ method
```
Possibly problematic parts:
```
for diversity in [0.2, 0.5, 1.0, 1.2]:
    X = X_
    print("diversity %f:" % diversity)
    for t in range(400):
        y_pred = model.predict(X, diversity)
        print(data_loader.indices_char[y_pred[0]], end='', flush=True)
        X = np.concatenate([X[:, 1:], np.expand_dims(y_pred, axis=1)], axis=-1)
    print("\n")
```
```
def predict(self, inputs, temperature=1.):
    batch_size, _ = tf.shape(inputs)
    logits = self(inputs, from_logits=True)
    prob = tf.nn.softmax(logits / temperature).numpy()
    return np.array([np.random.choice(self.num_chars, p=prob[i, :])
                     for i in range(batch_size.numpy())])
```
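Look at which `predict` the traceback enters: `tensorflow_core/.../training.py`, i.e. the built-in `tf.keras.Model.predict(x, batch_size, ...)`. The custom sampler is defined at module level, never attached to the model, so `model.predict(X, diversity)` dispatches to the Keras method and `diversity` (0.2) lands in `batch_size`, which later breaks the slice `index_array[batch_start:batch_end]`. A hedged fix is to give the sampler a non-colliding name and call it as a plain function (assuming `tf`, `np`, and the script's variables from above):

```python
def sample_next_chars(model, inputs, temperature=1.0):
    # same body as the question's sampler, but not named `predict`
    batch_size, _ = tf.shape(inputs)
    logits = model(inputs, from_logits=True)
    prob = tf.nn.softmax(logits / temperature).numpy()
    return np.array([np.random.choice(model.num_chars, p=prob[i, :])
                     for i in range(batch_size.numpy())])

# in the generation loop:
#   y_pred = sample_next_chars(model, X, diversity)
```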
My Keras model.fit is written inside a loop, and the callback generates one events file per epoch. How do I handle this?
```
if resume:
    # creates a generic neural network architecture
    model = Sequential()
    # hidden layer takes a pre-processed frame as input, and has 200 units
    model.add(Dense(units=200, input_dim=80*80, activation='relu', kernel_initializer='glorot_uniform'))
    # output layer
    model.add(Dense(units=1, activation='sigmoid', kernel_initializer='RandomNormal'))
    # compile the model using traditional Machine Learning losses and optimizers
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # print model
    model.summary()
    if os.path.isfile('Basic_Rl_weights.h5'):
        # load pre-trained model weight
        print("loading previous weights")
        model.load_weights('Basic_Rl_weights.h5')
else:
    # creates a generic neural network architecture
    model = Sequential()
    # hidden layer takes a pre-processed frame as input, and has 200 units
    model.add(Dense(units=200, input_dim=80*80, activation='relu', kernel_initializer='glorot_uniform'))
    # output layer
    model.add(Dense(units=1, activation='sigmoid', kernel_initializer='RandomNormal'))
    # compile the model using traditional Machine Learning losses and optimizers
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    # print model
    model.summary()
    # save model
    # model.save_weights('my_model_weights.h5')

log_dir = './log' + datetime.now().strftime("%Y%m%d-%H%M%S") + "/"
callbacks = callbacks.TensorBoard(log_dir=log_dir, histogram_freq=0,
                                  write_graph=True, write_images=True)

# gym initialization
env = gym.make("Pong-v0")
observation = env.reset()
prev_x = None  # used in computing the difference frame
running_reward = None

# initialization of variables used in the main loop
x_train, y_train, rewards = [], [], []
reward_sum = 0
episode_number = 0

# main loop
while True:
    if render:
        env.render()
    # preprocess the observation, set input as difference between images
    cur_x = prepro(observation)
    if prev_x is not None:
        x = cur_x - prev_x
    else:
        x = np.zeros(Input_dim)
    prev_x = cur_x
    # forward the policy network and sample action according to the proba distribution
    prob = model.predict(np.expand_dims(x, axis=1).T)
    if np.random.uniform() < prob:
        action = action_up
    else:
        action = action_down
    # 0 and 1 labels (a fake label in order to achieve the backpropagation algorithm)
    if action == 2:
        y = 1
    else:
        y = 0
    # log the input and label to train later
    x_train.append(x)
    y_train.append(y)
    # do one step in our environment
    observation, reward, done, info = env.step(action)
    rewards.append(reward)
    reward_sum += reward
    # end of an episode
    if done:
        print('At the end of episode', episode_number, 'the total reward was :', reward_sum)
        # increment episode number
        episode_number += 1
        # training
        model.fit(x=np.vstack(x_train), y=np.vstack(y_train), verbose=1,
                  sample_weight=discount_rewards(rewards), callbacks=[callbacks])
        if episode_number % 100 == 0:
            model.save_weights('Basic_Rl_weights' + datetime.now().strftime("%Y%m%d-%H%M%S") + '.h5')
        # Log the reward
        running_reward = reward_sum if running_reward is None else running_reward * 0.99 + reward_sum * 0.01
        tflog('running_reward', running_reward, custom_dir=log_dir)
        # Reinitialization
        x_train, y_train, rewards = [], [], []
        observation = env.reset()
        reward_sum = 0
        prev_x = None
```
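On the events-file question: the stock TensorBoard callback opens a fresh writer each time fit() is called, so a fit-per-episode loop produces one events file per call. A hedged workaround under TF 1.x (which this Keras snippet appears to use) is to keep one summary writer for the whole run and log scalars manually, instead of or alongside the callback:

```python
import tensorflow as tf

# one writer for the whole run; reuse it across every model.fit call
writer = tf.summary.FileWriter(log_dir)

def log_scalar(tag, value, step):
    # write a single scalar point into the same events file
    summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])
    writer.add_summary(summary, step)
    writer.flush()

# e.g. after each episode:
#   log_scalar('running_reward', running_reward, episode_number)
```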
utils.apply_modifications fails when doing Keras visualization
**#(1) Generated model.h5 from the MNIST data:**
```
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 5
# image dimensions
img_rows, img_cols = 28, 28
# load the mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# set the image format
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# build the network
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(54, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save('model.h5')
```
**#(2) Test with the generated model file:**
```
from keras.models import load_model
from vis.utils import utils
from keras import activations

model = load_model('model.h5')
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
```
Error: FileNotFoundError: [WinError 3] The system cannot find the path specified: '/tmp/curzzxs_.h5'
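keras-vis's `apply_modifications` works by saving the model to a temporary file and loading it back, and on Windows the `/tmp/...` path it picks does not exist. A minimal sketch that performs the same save-and-reload round trip by hand with a local filename (the filename is an arbitrary assumption):

```python
from keras.models import load_model

# save-and-reload replicates what utils.apply_modifications does internally
model.layers[layer_idx].activation = activations.linear
model.save('model_linear.h5')        # any writable local path works
model = load_model('model_linear.h5')
```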
A question about converting a Keras model to a TPU model on Colab
Using a TPU to accelerate training, converting the Keras model to a TPU model fails, as shown: ![screenshot](https://img-ask.csdn.net/upload/202001/14/1578998736_238721.png) Key code below. Imports:
```
%tensorflow_version 1.x
import json
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import callbacks
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```
The TPU conversion code:
```
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)

self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)
```
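One thing visible in the snippet itself: `self.model` is converted to a TPU model first and then immediately overwritten with a freshly built CPU-side ResNet50, so the conversion is lost. The usual order is build first, convert second; a hedged reordering of the question's own lines (names follow the question's snippet):

```python
# build the Keras model first...
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)

# ...then convert the already-built model for the TPU
self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
```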
Why does Keras report ACC: 1.0000, Recall: 1.0000, F1-score: 1.0000, Precision: 1.0000?
A binary image classification with Keras, trained for only 5 epochs. The results:
```
[[205   0]
 [  0  28]]
Keras AUC: 1.0
AUC: 1.0000  ACC: 1.0000  Recall: 1.0000  F1-score: 1.0000  Precision: 1.0000
```
Code:
```
data = np.load('.npz')
image_data, label_data = data['image'], data['label']
skf = StratifiedKFold(n_splits=3, shuffle=True)
for train, test in skf.split(image_data, label_data):
    train_x = image_data[train]
    test_x = image_data[test]
    train_y = label_data[train]
    test_y = label_data[test]

train_x = np.array(train_x)
test_x = np.array(test_x)
train_x = train_x.reshape(train_x.shape[0], 1, 28, 28)
test_x = test_x.reshape(test_x.shape[0], 1, 28, 28)
train_x = train_x.astype('float32')
test_x = test_x.astype('float32')
train_x /= 255
test_x /= 255
train_y = np.array(train_y)
test_y = np.array(test_y)

model.compile(optimizer='rmsprop', loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_x, train_y, batch_size=64, epochs=5, verbose=1, validation_data=(test_x, test_y))
```
Judging by these results the code must contain some absurd error. Experts, where is the mistake? Thank you.
A question about keras model.predict_classes()
keras's model.predict_classes() only works on Sequential models. For a functional Model, how do I get the same kind of predicted-class output? E.g.
```
results = list(model.predict_classes(data_test, verbose=1))
score = accuracy_score(label_test, results)
```
works on a Sequential model; how do I achieve the same effect with a functional model?
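Since `predict_classes` is essentially an argmax (or a 0.5 threshold for a single sigmoid unit) on top of `predict`, the functional-API equivalent is short. A sketch, assuming a softmax output and the question's variable names:

```python
import numpy as np
from sklearn.metrics import accuracy_score

probs = model.predict(data_test, verbose=1)
results = np.argmax(probs, axis=-1)                 # softmax head
# results = (probs > 0.5).astype('int32').ravel()   # sigmoid-head alternative
score = accuracy_score(label_test, results)
```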
Training on fashion_mnist with TensorFlow in Spyder: the loss is NaN and the accuracy never changes
1. Built a neural network with three fully connected layers.
2. Code:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, sklearn, tf, keras:
    print(module.__name__, module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

# tf.keras.models.Sequential
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))

### sparse: the labels are index-encoded; with one-hot labels, drop the sparse variant
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])

#model.layers
#model.summary()
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_valid, y_valid))
```
3. Output:
```
runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2')
2.0.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
sklearn 0.21.3
tensorflow 2.0.0
tensorflow_core.keras 2.2.4-tf
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause:
55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 2/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 3/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 4/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 5/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 6/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 7/10
55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 8/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 9/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 10/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
```
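A likely cause, hedged: the pixel values go into the network as raw 0-255 floats, and with plain SGD this setup commonly diverges to NaN within the first batches. Scaling the inputs first is the usual fix:

```python
# normalize to [0, 1] before model.fit
x_train = x_train / 255.0
x_valid = x_valid / 255.0
x_test = x_test / 255.0
```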
What is the difference between call and __call__ in a Python class?
When implementing linear regression with TensorFlow's low-level API, the model is defined like this:
```python
class Model(object):
    def __init__(self):
        self.w = tf.random.uniform([1])
        self.b = tf.random.uniform([1])

    def __call__(self, x):
        return self.w * x + self.b
```
But when using Keras it is written as:
```
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.dense = tf.keras.layers.Dense(1)

    def __call__(self, x):
        return self.dense(x)
```
which raises `__call__() got an unexpected keyword argument 'training'`, and `__call__` has to be changed to `call`. What is the difference between the two?
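The short version: a plain Python object only has the `__call__` you define, but `tf.keras.Model` already implements `__call__` as a wrapper that does Keras bookkeeping (the `training` flag, input conversion, layer building) and then delegates to your `call`; overriding `__call__` bypasses that wrapper, which is why Keras passes `training=` and crashes. A minimal sketch:

```python
import tensorflow as tf

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):            # override call, not __call__
        return self.dense(x)

model = Model()
y = model(tf.ones([4, 3]))        # Keras's __call__ wraps and invokes call()
```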
Importing Keras on Ubuntu fails: No module named 'error'
CUDA 9.0 and TensorFlow 1.8.0 are installed, and `import tensorflow` works fine, but `import keras` fails. Help, please! The error:
```
Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/zhangzhiyang/PycharmProjects/tensorflow1/test_keras.py", line 2, in <module>
    import keras
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/__init__.py", line 26, in <module>
    from .multi_gpu_utils import multi_gpu_model
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/multi_gpu_utils.py", line 7, in <module>
    from ..layers.merge import concatenate
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/layers/__init__.py", line 4, in <module>
    from ..engine.base_layer import Layer
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/__init__.py", line 7, in <module>
    from .network import get_source_inputs
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/network.py", line 9, in <module>
    import yaml
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/yaml/__init__.py", line 2, in <module>
    from error import *
ModuleNotFoundError: No module named 'error'
```
My versions: tensorflow 1.8.0, CUDA 9.0, cuDNN 7, Anaconda 3, Python 3.6.5. Both tensorflow and keras are installed under anaconda3/envs/tensorflow/lib/python3.6/site-packages. My .bashrc:
```
export PATH="/home/zhangzhiyang/anaconda3/bin:$PATH"
export LD_LIBRARY_PATH="/home/zhangzhiyang/newdisk/cuda-9.0/lib64:$LD_LIBRARY_PATH"
export PATH="/home/zhangzhiyang/newdisk/cuda-9.0/bin:$PATH"
export CUDA_HOME=$CUDA_HOME:"/home/zhangzhiyang/newdisk/cuda-9.0"
```
My guess is a Python version problem, but I don't know how to fix it. The first time I pip-installed Keras without specifying a target, it went into Python 2.7; this time I targeted python3.6/site-packages and got the error above. Does Keras not support Python 3? Please help!
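The failing frame is inside PyYAML, not Keras: `from error import *` in `yaml/__init__.py` is Python 2 import syntax, which suggests a Python 2 build of PyYAML ended up in this Python 3.6 environment (plausible after the earlier mixed pip installs). Keras itself runs fine on Python 3. A small diagnostic sketch, with the reinstall step as a comment (the exact command is an assumption):

```python
import importlib.util

# locate the PyYAML this interpreter would import, without executing it
spec = importlib.util.find_spec('yaml')
print(spec.origin)

# if it points at a py2-style package, reinstall PyYAML with this env's pip:
#   pip install --upgrade --force-reinstall pyyaml
```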
Tensorflow 2.0 : When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
The code below fails on the last step of every epoch. Could an expert explain what is going on?
```
import tensorflow_datasets as tfds

dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
tokenizer = info.features['text'].encoder
print('vocabulary size: ', tokenizer.vocab_size)

sample_string = 'Hello world, tensorflow'
tokenized_string = tokenizer.encode(sample_string)
print('tokened id: ', tokenized_string)
src_string = tokenizer.decode(tokenized_string)
print(src_string)
for t in tokenized_string:
    print(str(t) + ': ' + tokenizer.decode([t]))

BUFFER_SIZE = 6400
BATCH_SIZE = 64
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
print("Number of training examples: {}".format(num_train_examples))
print("Number of test examples: {}".format(num_test_examples))

train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)

def get_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    return model

model = get_model()
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

import math
#from tensorflow import keras
#train_dataset = keras.preprocessing.sequence.pad_sequences(train_dataset, maxlen=BUFFER_SIZE)
history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) - 90),
                    validation_data=test_dataset)
```
The output and error:
```
Train on 10 steps
Epoch 1/2
 9/10 [==========================>...] - ETA: 3s - loss: 0.6955 - accuracy: 0.4479
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-111-8ddec076c096> in <module>
      6                     epochs=2,
      7                     steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ),
----> 8                     validation_data= test_dataset)

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    672         validation_steps=validation_steps,
    673         validation_freq=validation_freq,
--> 674         steps_name='steps_per_epoch')

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    437             validation_in_fit=True,
    438             prepared_feed_values_from_dataset=(val_iterator is not None),
--> 439             steps_name='validation_steps')
    440         if not isinstance(val_results, list):
    441             val_results = [val_results]

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    174     if not is_dataset:
    175         num_samples_or_steps = _get_num_samples_or_steps(ins, batch_size,
--> 176                                                          steps_per_epoch)
    177     else:
    178         num_samples_or_steps = steps_per_epoch

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in _get_num_samples_or_steps(ins, batch_size, steps_per_epoch)
    491         return steps_per_epoch
    492     return training_utils.check_num_samples(ins, batch_size, steps_per_epoch,
--> 493                                             'steps_per_epoch')

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name)
    422         raise ValueError('If ' + steps_name +
    423                          ' is set, the `batch_size` must be None.')
--> 424     if check_steps_argument(ins, steps, steps_name):
    425         return None

/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_steps_argument(input_data, steps, steps_name)
   1199         raise ValueError('When using {input_type} as input to a model, you should'
   1200                          ' specify the `{steps_name}` argument.'.format(
-> 1201                              input_type=input_type_str, steps_name=steps_name))
   1202     return True

ValueError: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
```
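Note where the traceback lands: `steps_name='validation_steps'`. Training has its `steps_per_epoch`, but `validation_data` is also a dataset tensor and no `validation_steps` was supplied, so the check fails at the end of the first epoch. A hedged fix, reusing names from the snippet above:

```python
history = model.fit(train_dataset,
                    epochs=2,
                    steps_per_epoch=math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90,
                    validation_data=test_dataset,
                    validation_steps=math.ceil(num_test_examples / BATCH_SIZE))
```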
Why does predict_classes(x) keep raising errors no matter what format I pass for x?
```
# BP neural network implementation
# 1. read the data
# 2. keras.models Sequential / keras.layers.core Dense Activation
# 3. build the model with Sequential
# 4. build the layers with Dense
# 5. Activation as the activation function
# 6. compile the model
# 7. fit (training)
# 8. validation (testing, class prediction)

# use a neural network to predict course sales
# read and clean the data
import pandas as pda
import numpy as npy

fname = 'D:\\shuju\\fenleisuanfa\\lesson2.csv'
dataf = pda.read_csv(fname)
x = dataf.iloc[:, 1:5].values
y = dataf.iloc[:, 5:6].values
for i in range(0, len(x)):
    for j in range(0, len(x[i])):
        thisdata = x[i][j]
        if thisdata == '是' or thisdata == '多' or thisdata == '高':
            x[i][j] = 1
        else:
            x[i][j] = 0
for i in range(0, len(y)):
    thisdata = y[i]
    if thisdata == '高':
        y[i] = 1
    else:
        y[i] = 0
xf = pda.DataFrame(x)
yf = pda.DataFrame(y)
x2 = xf.values.astype(int)
y2 = yf.values.astype(int)

# the neural network model
from keras.models import Sequential
from keras.layers.core import Dense, Activation
import keras.preprocessing.text as t
from keras.preprocessing.text import Tokenizer as tk
from keras.preprocessing.text import text_to_word_sequence

model = Sequential()
# input layer
model.add(Dense(10, input_dim=len(x2[0])))
model.add(Activation('relu'))
# output layer
model.add(Dense(1, input_dim=1))
model.add(Activation('sigmoid'))
# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# training
rst = model.fit(x2, y2, epochs=10, batch_size=100)
# class prediction
model.predict_classes(x).reshape(len(x))
```
![screenshot](https://img-ask.csdn.net/upload/201909/11/1568184347_152341.jpg)
![screenshot](https://img-ask.csdn.net/upload/201909/11/1568184147_136600.jpg)
![screenshot](https://img-ask.csdn.net/upload/201909/11/1568184174_122879.jpg)
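One thing stands out: training uses the cleaned integer matrix `x2`, but the final prediction runs on `x`, the raw object-dtype array that still holds the original strings, which is a classic source of predict-time errors here. A hedged fix is to predict on the same numeric array the model was trained on:

```python
# predict on the cleaned integer matrix, not the raw string-valued array
pred = model.predict_classes(x2).reshape(len(x2))
print(pred)
```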
How do I feed a Dataset into TensorFlow's Keras as input?
Whenever I feed the dataset in as input I get the error below, even though I already reshape the images to (512, 512, 1) in the parsing step. How should I change this?
```
ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (None, 1)
```
**Image size definitions**
```
import tensorflow as tf
from tensorflow import keras

IMG_HEIGHT = 512
IMG_WIDTH = 512
IMG_CHANNELS = 1
IMG_PIXELS = IMG_CHANNELS * IMG_HEIGHT * IMG_WIDTH
```
**Parsing function**
```
def parser(record):
    features = tf.parse_single_example(record, features={
        'image_raw': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([23], tf.int64)
    })
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    label = tf.cast(features['label'], tf.int32)
    image.set_shape([IMG_PIXELS])
    image = tf.reshape(image, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
    image = tf.cast(image, tf.float32)
    return image, label
```
**Model construction**
```
dataset = tf.data.TFRecordDataset([TFRECORD_PATH])
dataset.map(parser)
dataset = dataset.repeat(10*10).batch(10)

model = keras.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=(5, 5), padding='same',
                        activation='relu', input_shape=(512, 512, 1)),
    keras.layers.MaxPool2D(pool_size=(2, 2)),
    keras.layers.Dropout(0.25),
    keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same', activation='relu'),
    keras.layers.MaxPool2D(pool_size=(2, 2)),
    keras.layers.Dropout(0.25),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(23, activation='softmax')
])
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.sparse_categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
model.fit(dataset.make_one_shot_iterator(), epochs=10, steps_per_epoch=10)
```
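One likely culprit in the model-construction block: `Dataset.map` returns a new dataset rather than modifying the old one in place, and the snippet discards that return value, so the raw serialized records (hence shape `(None, 1)`) reach the model instead of parsed `(512, 512, 1)` images. A minimal sketch of the fix:

```python
dataset = tf.data.TFRecordDataset([TFRECORD_PATH])
dataset = dataset.map(parser)            # keep the parsed dataset
dataset = dataset.repeat(10 * 10).batch(10)
```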
Running TensorFlow fails with tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed
When running TensorFlow I hit tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed. Searching suggests the GPU is occupied. The trouble starts here:
```
2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
(60000, 28, 28) (60000, 10)
2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found
```
![screenshot](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png)

The final error shown:

![screenshot](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png)

I tried the fixes found online, for example adding:
```
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
```
but that just reports:

![screenshot](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png)

Now I don't know what to do. As a beginner I wanted to try simple digit recognition, following a tutorial step by step, so this may be a version mismatch: I'm on a freshly downloaded TensorFlow 2.0 with the following:

![screenshot](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png)

Urgently asking the experts: are there other things I can try? A test program doing simple addition runs fine, and while the digit-recognition script runs the GPU peaks at only 0.2% utilization. The complete digit-recognition code:
```
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
#gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
#sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

(x, y), (x_val, y_val) = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
y = tf.one_hot(y, depth=10)
print(x.shape, y.shape)
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
train_dataset = train_dataset.batch(200)

model = keras.Sequential([
    layers.Dense(512, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(10)])

optimizer = optimizers.SGD(learning_rate=0.001)


def train_epoch(epoch):
    # Step4. loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # Step1. compute output
            # [b, 784] => [b, 10]
            out = model(x)
            # Step2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]
        # Step3. optimize and update w1, w2, w3, b1, b2, b3
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())


def train():
    for epoch in range(30):
        train_epoch(epoch)


if __name__ == '__main__':
    train()
```
Hoping someone can offer a suggestion or a fix. Many thanks!
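Two separate things show up in these logs, and both are version-related: `cublas64_100.dll not found` means the CUDA 10.0 runtime that TensorFlow 2.0 builds link against is missing from PATH, and `tf.GPUOptions`/`tf.Session` are TF 1.x APIs that no longer exist at the top level in TF 2.0, which is why the suggested fix itself errors. A hedged TF 2.0 equivalent for limiting GPU memory use:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # allocate GPU memory on demand instead of grabbing it all up front
    tf.config.experimental.set_memory_growth(gpus[0], True)
```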
SegNet implemented in Keras raises a ValueError. Could someone take a look?
![screenshot](https://img-ask.csdn.net/upload/201904/05/1554454470_801036.jpg)

The error is: `Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (32, 10)`. Keras with the TensorFlow backend. Code below:
```
# coding=utf-8
import matplotlib
from PIL import Image
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import argparse
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization, Reshape, Permute, Activation, Flatten
# from keras.utils.np_utils import to_categorical
# from keras.preprocessing.image import img_to_array
from keras.models import Model
from keras.layers import Input
from keras.callbacks import ModelCheckpoint
# from sklearn.preprocessing import LabelBinarizer
# from sklearn.model_selection import train_test_split
# import pickle
import os
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

path = '/tmp/2'
os.chdir(path)
training_set = train_datagen.flow_from_directory(
    'trainset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)
test_set = test_datagen.flow_from_directory(
    'testset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)

def SegNet():
    model = Sequential()
    # encoder
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (8,8)
    # decoder
    model.add(UpSampling2D(size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (256,256)
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(10, (1, 1), strides=(1, 1), padding='valid', activation='relu'))
    model.add(BatchNormalization())
    model.add(Reshape((64*64, 10)))
    # swap axis=1 and axis=2, equivalent to np.swapaxes(layer, 1, 2)
    model.add(Permute((2, 1)))
    #model.add(Flatten())
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    model.summary()
    return model

def main():
    model = SegNet()
    filepath = "/tmp/2/weights.best.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
    callbacks_list = [checkpoint]
    history = model.fit_generator(
        training_set,
        steps_per_epoch=(training_set.samples / 32),
        epochs=20,
        callbacks=callbacks_list,
        validation_data=test_set,
        validation_steps=(test_set.samples / 32))
    # Plotting the Loss and Classification Accuracy
    model.metrics_names
    print(history.history.keys())
    # "Accuracy"
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()
    # "Loss"
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()

if __name__ == '__main__':
    main()
```
The core question: SegNet has no fully connected layers, so shouldn't the final output be a per-pixel label map the same size as the input image? How should I change this? The input images are 64x64 with 3 channels, 10 classes in total, placed in the testset and trainset folders.
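The mismatch itself: flow_from_directory with class_mode='categorical' yields one 10-way label per image, shape `(32, 10)`, while this SegNet head emits a per-pixel map, shape `(batch, 10, 64*64)` after the Permute. If whole-image classification is actually the goal, a hedged sketch is to swap the segmentation head for a classification head; for true segmentation you would instead need a generator that yields per-pixel masks rather than flow_from_directory labels.

```python
from keras.layers import GlobalAveragePooling2D, Dense

# after the final decoder Conv2D/BatchNormalization, instead of
# Reshape((64*64, 10)) / Permute((2, 1)) / Activation('softmax'):
model.add(GlobalAveragePooling2D())
model.add(Dense(10, activation='softmax'))  # one label per image, matches (32, 10)
```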