Keras error: All inputs to the layer should be tensors.

I'm new to deep learning and building a network with Keras for the first time. I've run into a problem and would appreciate some help:

```python
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers import Dense, Activation
from keras.layers import Concatenate
from keras.layers import Add
```

I built some embedding layers:

```python
model_store = Embedding(1115, 10)
model_dow = Embedding(7, 6)
model_day = Embedding(31, 10)
model_month = Embedding(12, 6)
model_year = Embedding(3, 2)
model_promotion = Embedding(2, 1)
model_state = Embedding(12, 6)
```

Then I tried to concatenate these embedding layers:

```python
output_embeddings = [model_store, model_dow, model_day, model_month, model_year, model_promotion, model_state]
output_model = Concatenate()(output_embeddings)
```

Running this raises an error:


```
ValueError                                Traceback (most recent call last)
D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs)
    278                 try:
--> 279                     K.is_keras_tensor(x)
    280                 except ValueError:

D:\python\lib\site-packages\keras\backend\tensorflow_backend.py in is_keras_tensor(x)
    473         raise ValueError('Unexpectedly found an instance of type ' +
--> 474                          str(type(x)) + '. '
    475                          'Expected a symbolic tensor instance.')

ValueError: Unexpectedly found an instance of type <class 'keras.layers.embeddings.Embedding'>. Expected a symbolic tensor instance.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
----> 1 output_model = Concatenate()(output_embeddings)

D:\python\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    412             # Raise exceptions in case the input is not compatible
    413             # with the input_spec specified in the layer constructor.
--> 414             self.assert_input_compatibility(inputs)
    415
    416             # Collect input shapes to build layer.

D:\python\lib\site-packages\keras\engine\base_layer.py in assert_input_compatibility(self, inputs)
    283                                  'Received type: ' +
    284                                  str(type(x)) + '. Full input: ' +
--> 285                                  str(inputs) + '. All inputs to the layer '
    286                                  'should be tensors.')
    287

ValueError: Layer concatenate_5 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.layers.embeddings.Embedding'>. Full input: [<keras.layers.embeddings.Embedding object at 0x...>, ...]. All inputs to the layer should be tensors.
```

The message says all inputs to the layer should be tensors. How should I fix this? Thanks!
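For reference, the root cause: `Embedding(1115, 10)` creates a layer object, not a tensor. A layer only yields a tensor once it is called on an input tensor, and `Concatenate()` must be given those output tensors. A minimal sketch of the usual functional-API fix (the `Flatten`/`Dense` head at the end is a hypothetical placeholder, not from the question):

```python
from keras.models import Model
from keras.layers import Input, Embedding, Concatenate, Flatten, Dense

# One Input per categorical feature; each sample is a single integer id.
feature_specs = [('store', 1115, 10), ('dow', 7, 6), ('day', 31, 10),
                 ('month', 12, 6), ('year', 3, 2), ('promotion', 2, 1),
                 ('state', 12, 6)]

inputs, embeddings = [], []
for name, vocab_size, dim in feature_specs:
    inp = Input(shape=(1,), name=name)
    # Calling the Embedding layer on a tensor returns a tensor of
    # shape (None, 1, dim) -- this is what Concatenate expects.
    embeddings.append(Embedding(vocab_size, dim)(inp))
    inputs.append(inp)

merged = Concatenate()(embeddings)   # (None, 1, 41)

# Hypothetical head, just to complete the model.
x = Flatten()(merged)
x = Dense(64, activation='relu')(x)
out = Dense(1)(x)

model = Model(inputs=inputs, outputs=out)
model.summary()
```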

2 answers

angeliacmm: Did you ever solve this?
2 months ago · Reply
weixin_43288916 (清爽的风): Hope this helps solve your problem.
9 months ago · Reply
Other related questions
Importing keras on Ubuntu fails: No module named 'error'

CUDA 9.0 and TensorFlow 1.8.0 are installed, and `import tensorflow` works fine, but `import keras` fails. Could someone help? The error:

```
Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/zhangzhiyang/PycharmProjects/tensorflow1/test_keras.py", line 2, in <module>
    import keras
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/__init__.py", line 26, in <module>
    from .multi_gpu_utils import multi_gpu_model
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/multi_gpu_utils.py", line 7, in <module>
    from ..layers.merge import concatenate
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/layers/__init__.py", line 4, in <module>
    from ..engine.base_layer import Layer
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/__init__.py", line 7, in <module>
    from .network import get_source_inputs
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/network.py", line 9, in <module>
    import yaml
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/yaml/__init__.py", line 2, in <module>
    from error import *
ModuleNotFoundError: No module named 'error'
```

My versions: tensorflow 1.8.0, CUDA 9.0, cuDNN 7, Anaconda 3, Python 3.6.5. Both tensorflow and keras are installed under anaconda3/envs/tensorflow/lib/python3.6/site-packages. My .bashrc contains:

```
export PATH="/home/zhangzhiyang/anaconda3/bin:$PATH"
export LD_LIBRARY_PATH="/home/zhangzhiyang/newdisk/cuda-9.0/lib64:$LD_LIBRARY_PATH"
export PATH="/home/zhangzhiyang/newdisk/cuda-9.0/bin:$PATH"
export CUDA_HOME=$CUDA_HOME:"/home/zhangzhiyang/newdisk/cuda-9.0"
```

I suspect a Python version problem but don't know how to fix it. The first time I pip-installed Keras without specifying a path, it ended up under Python 2.7; this time I pointed it at python3.6/site-packages and got the error above. Does Keras not support Python 3?
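A plausible reading (an assumption, not confirmed beyond the traceback): `from error import *` is Python 2 import syntax, and it fails inside PyYAML's `__init__.py`, which suggests a Python 2 build of PyYAML ended up in this Python 3.6 environment. Keras itself supports Python 3; reinstalling PyYAML inside the activated env (`pip uninstall pyyaml` then `pip install pyyaml`) would be the first thing to try. A small diagnostic sketch:

```python
# Diagnostic sketch: confirm which interpreter and which PyYAML are in use.
import sys
print(sys.executable)  # should point inside anaconda3/envs/tensorflow/
print(sys.version)     # should be 3.6.x

# If this import fails as in the traceback, the installed PyYAML is not a
# Python 3 build; yaml.__file__ shows exactly which copy gets imported.
import yaml
print(yaml.__file__)
```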

Installing keras with pip fails

I followed the tutorial: first `activate tensorflow`, then `pip install keras`, and it errors out. Even after updating pip there is still a wall of red errors, and it looks like a lot of files are broken. What should I do? Please help! ![screenshot](https://img-ask.csdn.net/upload/201904/21/1555841668_663068.png) ![screenshot](https://img-ask.csdn.net/upload/201904/21/1555841707_360847.png)

Keras error: ValueError: 'pool5' is not in list

A long project implementing VGG16 under Keras. This is the failing code segment:

```python
for roi, roi_context in zip(rois, rois_context):
    ins = [im_in, dmap_in, np.array([roi]), np.array([roi_context])]
    print("Testing ROI {c}")
    subtimer.tic()
    blobs_out = model.predict(ins)
    subtimer.toc()
    print("Storing Results")
    print(layer_names)
    post_roi_layers = set(layer_names[layer_names.index("pool5"):])
    for name, val in zip(layer_names, blobs_out):
        if name not in outs:
            outs[name] = val
        else:
            if name in post_roi_layers:
                outs[name] = np.concatenate([outs[name], val])
    c += 1
```

The error:

```
Loading Test Data
data is loaded from roidb_test_19_smol.pkl
Number of Images to test: 10
Testing ROI {c}
Storing Results
['cls_score', 'bbox_pred_3d']
Traceback (most recent call last):
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/main.py", line 6, in <module>
    results = test_main.test_tf_implementation(cache_file="roidb_test_19_smol.pkl", weights_path="rgbd_det_iter_40000.h5")
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test_main.py", line 36, in test_tf_implementation
    results = test.test_net(tf_model, roidb)
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 324, in test_net
    im_detect_3d(net, im, dmap, test['boxes'], test['boxes_3d'], test['rois_context'])
  File "/Users/xijiejiao/Amodal3Det_TF/tfmodel/test.py", line 200, in im_detect_3d
    post_roi_layers = set(layer_names[layer_names.index("pool5"):])
ValueError: 'pool5' is not in list
```
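A hedged observation: the printed `layer_names` is `['cls_score', 'bbox_pred_3d']`, so `layer_names.index("pool5")` has to raise ValueError — the Keras model only exposes those two outputs, while the code assumes a Caffe-style list of layers that includes `pool5`. Either the model needs to expose the intermediate layers as outputs, or the lookup needs a guard, e.g.:

```python
# Sketch: guard the lookup instead of assuming "pool5" is among the outputs.
if "pool5" in layer_names:
    post_roi_layers = set(layer_names[layer_names.index("pool5"):])
else:
    # Hypothetical fallback: treat all exposed outputs as post-ROI layers;
    # adjust to whatever the surrounding accumulation logic needs.
    post_roi_layers = set(layer_names)
```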

Python error: AttributeError: module 'curses' has no attribute 'wrapper'

Windows, Python 3.7.0, running in VS Code; `curses.wrapper(main)` fails:

```
PS E:\dai ma\aaa> C:/Users/夏洛洛/AppData/Local/Programs/Python/Python37/python.exe "e:/dai ma/aaa/项目/2048.py"
Traceback (most recent call last):
  File "e:/dai ma/aaa/项目/2048.py", line 219, in <module>
    curses.wrapper(main)
AttributeError: module 'curses' has no attribute 'wrapper'
```

I looked this error up; it can be caused by a file-name conflict, but renaming the file didn't help. Could it be a Windows platform issue? Any help appreciated.
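For what it's worth: the standard library's `curses` is not functional on Windows CPython, so a `curses` module that imports but lacks `wrapper` usually means either a local file shadowing the module or an incomplete install; the common fix is the third-party `windows-curses` package (`pip install windows-curses`). A quick diagnostic sketch:

```python
# Sketch: see what "curses" actually resolves to before calling wrapper().
# Note: on Windows without the windows-curses package, this import may fail
# outright with ModuleNotFoundError instead.
import curses
print(curses.__file__)             # a path in your project means a shadowing file
print(hasattr(curses, "wrapper"))  # should be True once a working curses is installed
```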

Python error: not enough values to unpack (expected 3, got 1) — how do I fix this?

![screenshot](https://img-ask.csdn.net/upload/201910/22/1571745918_625213.png) ![screenshot](https://img-ask.csdn.net/upload/201910/22/1571745931_336418.png) The code:

```python
# -*- coding:utf-8 -*-
import numpy as np
import pandas as pd
from collections import Counter
from sklearn import preprocessing
import scipy
import sys
import os

path1 = os.path.abspath('.')
print(path1)
name = pd.read_table("genotype.sav", header=0, sep=',')
print(name)
print(name.columns)
for i in name.columns:
    a, b, c = Counter(name[i]).keys()
    if a[0] == a[1]:
        # print(keys[0], keys[1])
        name[i].replace(a, 0, inplace=True)
        name[i].replace(b, 1, inplace=True)
        name[i].replace(c, 2, inplace=True)
    elif a[0] != a[1]:
        name[i].replace(a, 1, inplace=True)
        name[i].replace(b, 0, inplace=True)
        name[i].replace(c, 2, inplace=True)
    # print(keys)
    # print(name[i])
name.to_csv('rename.csv')
# recode_ID()
```
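The likely cause: `a, b, c = Counter(name[i]).keys()` assumes every column has exactly three distinct values, and "expected 3, got 1" says some column has only one. A sketch of a guarded version (using the question's `name` DataFrame; the skip policy is a hypothetical choice):

```python
from collections import Counter

for col in name.columns:
    keys = list(Counter(name[col]).keys())
    if len(keys) != 3:
        # Hypothetical handling: skip columns without exactly three genotypes.
        print(col, "has", len(keys), "distinct values; skipped")
        continue
    a, b, c = keys
    mapping = {a: 0, b: 1, c: 2} if a[0] == a[1] else {a: 1, b: 0, c: 2}
    name[col] = name[col].replace(mapping)
```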

keras model.evaluate() error: 'numpy.float64' object is not iterable

```python
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.25)
mean = x_train.mean(axis=0)
std = x_train.std(axis=0)
train_data = (x_train - mean) / std
test_data = (x_test - mean) / std

model = Sequential([Dense(64, input_shape=(6,)),
                    Activation('relu'),
                    Dense(32),
                    Activation('relu'),
                    Dense(1)])
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
k = model.fit
[loss, sgd] = model.evaluate(test_data, y_test, verbose=1)
```

I can't tell what went wrong in the last step. test_data and y_test are both DataFrames.

```
TypeError                                 Traceback (most recent call last)
<ipython-input-29-3b0767a3c446> in <module>
----> 1 [loss, mse] = model.evaluate(test_data, y_test, verbose=1)

TypeError: 'numpy.float64' object is not iterable
```
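The cause: the model is compiled with only a loss and no metrics, so `model.evaluate()` returns a single scalar, and unpacking it into two names fails. (Note also that `k = model.fit` only references the method without calling it.) A sketch of both fixes:

```python
# Either take the single scalar...
loss = model.evaluate(test_data, y_test, verbose=1)

# ...or compile with a metric so evaluate() returns [loss, metric]:
model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mse'])
loss, mse = model.evaluate(test_data, y_test, verbose=1)
```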

BERT training error: IndexError: list index out of range — help appreciated!

![screenshot](https://img-ask.csdn.net/upload/202004/29/1588175660_746755.png) The output:

```
C:\Users\DELL\Anaconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
  (the same FutureWarning repeats at dtypes.py:524-532 for quint8, qint16, quint16, qint32 and np_resource)
Traceback (most recent call last):
  File "D:/senti/code/Bert/run_classifier.py", line 1024, in <module>
    tf.app.run()
  File "C:\Users\DELL\Anaconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "D:/senti/code/Bert/run_classifier.py", line 885, in main
    train_examples = processor.get_train_examples(FLAGS.data_dir)
  File "D:/senti/code/Bert/run_classifier.py", line 385, in get_train_examples
    self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
  File "D:/senti/code/Bert/run_classifier.py", line 408, in _create_examples
    text_a = tokenization.convert_to_unicode(line[1])
IndexError: list index out of range
```
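A common cause, offered as an assumption: the crash is at `text_a = tokenization.convert_to_unicode(line[1])`, so some row of `train.tsv` has fewer columns than the processor expects — typically a trailing blank line, or a file that is not really tab-separated. A sketch of a guard inside `_create_examples` (the column threshold is hypothetical; match your processor's layout):

```python
# Sketch: skip malformed rows instead of indexing blindly.
for (i, line) in enumerate(lines):
    if len(line) < 2:   # hypothetical minimum; match the columns your task uses
        continue
    text_a = tokenization.convert_to_unicode(line[1])
    ...
```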

Error: firstFolder = sys.argv[1] IndexError: list index out of range — why?

At run time I get: firstFolder = sys.argv[1] IndexError: list index out of range. What's going on?

```python
import numpy as np
import cv2
import sys
from matplotlib import pyplot as plt

# img = cv2.imread('logo.png',0)
# # Initiate ORB detector
# orb = cv2.ORB_create()
# # find the keypoints with ORB
# kp = orb.detect(img,None)
# # compute the descriptors with ORB
# kp, des = orb.compute(img, kp)
# # draw only keypoints location,not size and orientation
# img2 = cv2.drawKeypoints(img, kp, None, color=(0,255,0), flags=0)
# plt.imshow(img2), plt.show()

from os import listdir
from os.path import isfile, join


class Application:
    def __init__(self, extractor, detector):
        self.extractor = extractor
        self.detector = detector

    def train_vocabulary(self, file_list, vocabulary_size):
        kmeans_trainer = cv2.BOWKMeansTrainer(vocabulary_size)
        for path_to_image in file_list:
            img = cv2.imread(path_to_image, 0)
            kp, des = self.detector.detectAndCompute(img, None)
            kmeans_trainer.add(des)
        return kmeans_trainer.cluster()

    def extract_features_from_image(self, file_name):
        image = cv2.imread(file_name)
        return self.extractor.compute(image, self.detector.detect(image))

    def extract_train_data(self, file_list, category):
        train_data, train_responses = [], []
        for path_to_file in file_list:
            train_data.extend(self.extract_features_from_image(path_to_file))
            train_responses.append(category)
        return train_data, train_responses

    def train_classifier(self, data, responses):
        n_trees = 200
        max_depth = 10
        model = cv2.ml.RTrees_create()
        eps = 1
        criteria = (cv2.TERM_CRITERIA_MAX_ITER, n_trees, eps)
        model.setTermCriteria(criteria)
        model.setMaxDepth(max_depth)
        model.train(np.array(data), cv2.ml.ROW_SAMPLE, np.array(responses))
        return model

    def predict(self, file_name):
        features = self.extract_features_from_image(file_name)
        return self.classifier.predict(features)[0]

    def train(self, files_array, vocabulary_size=12):
        all_categories = []
        for category in files_array:
            all_categories += category
        vocabulary = self.train_vocabulary(all_categories, vocabulary_size)
        self.extractor.setVocabulary(vocabulary)
        data = []
        responses = []
        for id in range(len(files_array)):
            data_temp, responses_temp = self.extract_train_data(files_array[id], id)
            data += data_temp
            responses += responses_temp
        self.classifier = self.train_classifier(data, responses)

    def error(self, file_list, category):
        responses = np.array([self.predict(file) for file in file_list])
        _responses = np.array([category for _ in range(len(responses))])
        return 1 - np.sum(responses == _responses) / len(responses)


def get_images_from_folder(folder):
    return ["%s/%s" % (folder, f) for f in listdir(folder) if isfile(join(folder, f))]


def start(folders, detector_type, voc_size, train_proportion):
    if detector_type == "SIFT":  # "Scale Invariant Feature Transform"
        extract = cv2.xfeatures2d.SIFT_create()
        detector = cv2.xfeatures2d.SIFT_create()
    else:  # "Speeded up Robust Features"
        extract = cv2.xfeatures2d.SURF_create()
        detector = cv2.xfeatures2d.SURF_create()
    flann_params = dict(algorithm=1, trees=5)
    matcher = cv2.FlannBasedMatcher(flann_params, {})
    extractor = cv2.BOWImgDescriptorExtractor(extract, matcher)
    train = []
    test = []
    for folder in folders:
        images = get_images_from_folder(folder)
        np.random.shuffle(images)
        slice = int(len(images) * train_proportion)
        train_images = images[0:slice]
        test_images = images[slice:]
        train.append(train_images)
        test.append(test_images)
    app = Application(extractor, detector)
    app.train(train, voc_size)
    total_error = 0.0
    for id in range(len(test)):
        print(app.error(train[id], id))
        test_error = app.error(test[id], id)
        print(test_error)
        print("---------")
        total_error = total_error + test_error
    total_error = total_error / float(len(test))
    print("Total error = %f" % total_error)


firstFolder = sys.argv[1]
secondFolder = sys.argv[2]
detectorType = sys.argv[3]
vocSize = int(sys.argv[4])
trainProportion = float(sys.argv[5])
start([firstFolder, secondFolder], detectorType, vocSize, trainProportion)
```
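That IndexError simply means the script was launched without command-line arguments: `sys.argv[1]` through `sys.argv[5]` only exist when five arguments are passed after the script name. A sketch of a friendlier guard:

```python
import sys

# Sketch: fail with a usage message instead of an IndexError.
if len(sys.argv) < 6:
    sys.exit("usage: python script.py <firstFolder> <secondFolder> "
             "<SIFT|SURF> <vocSize> <trainProportion>")

firstFolder, secondFolder, detectorType = sys.argv[1:4]
vocSize = int(sys.argv[4])
trainProportion = float(sys.argv[5])
```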

Theano error: Wrong number of dimensions...

The error occurs at `...(l2.b, l2.b - learning_rate * gb2)])`:

```
TypeError: ('Bad input argument to theano function with name "C:/Users/Administrator/Desktop/...python/theano/Regularization....py:66" at index 1 (0-based)', 'Wrong number of dimensions: expected 2, got 1 with shape (200,).')
```

```python
import theano
from sklearn.datasets import load_boston
import theano.tensor as T
import numpy as np
import matplotlib.pyplot as plt


class Layer(object):  # define a network layer
    def __init__(self, inputs, in_size, out_size, activation_function=None):
        self.W = theano.shared(np.random.normal(0, 1, (in_size, out_size)))
        self.b = theano.shared(np.zeros((out_size, )) + 0.1)
        self.Wx_plus_b = T.dot(inputs, self.W) + self.b
        self.activation_function = activation_function
        if activation_function is None:
            self.outputs = self.Wx_plus_b
        else:
            self.outputs = self.activation_function(self.Wx_plus_b)


def minmax_normalization(data):  # normalize the data
    xs_max = np.max(data, axis=0)
    xs_min = np.min(data, axis=0)
    xs = (1 - 0) * (data - xs_min) / (xs_max - xs_min) + 0
    return xs


N = 400
feats = 28
lamda = 0.1
np.random.seed(100)
x_data = rng.randn(N, feats)
x_data = minmax_normalization(x_data)
y_data = rng.randint(size=N, low=0, high=2)

x_train, y_train = x_data[:200], y_data[:200]
x_test, y_test = x_data[200:], y_data[200:]

x = T.dmatrix("x")
y = T.dmatrix("y")

l1 = Layer(x, 13, 50, T.tanh)
l2 = Layer(l1.outputs, 50, 1, None)

cost = T.mean(T.square(l2.outputs - y)) + lamda * ((l1.W ** 2).sum() + (l2.W ** 2).sum())
gW1, gb1, gW2, gb2 = T.grad(cost, [l1.W, l1.b, l2.W, l2.b])

learning_rate = 0.01
train = theano.function(
    inputs=[x, y],
    updates=[(l1.W, l1.W - learning_rate * gW1),
             (l1.b, l1.b - learning_rate * gb1),
             (l2.W, l2.W - learning_rate * gW2),
             (l2.b, l2.b - learning_rate * gb2)])

compute_cost = theano.function(inputs=[x, y], outputs=cost)

train_err_list = []
test_err_list = []
learning_time = []
for i in range(1000):
    train(x_train, y_train)
    if i % 10 == 0:
        # record cost
        train_err_list.append(compute_cost(x_train, y_train))
        test_err_list.append(compute_cost(x_test, y_test))
        learning_time.append(i)

plt.plot(learning_time, train_err_list, 'r-')
plt.plot(learning_time, test_err_list, 'b--')
plt.show()
```
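The message points at argument index 1, i.e. `y`: it is declared as `T.dmatrix` (2-D), but `y_train` has shape `(200,)`. Reshaping the targets to a column vector fixes this particular error; note also that `rng` is never defined (presumably `np.random`), and that `l1 = Layer(x, 13, 50, ...)` declares 13 input features while `x_data` has 28, which would be the next mismatch. A sketch:

```python
# Sketch: make the targets 2-D to match the dmatrix declaration.
rng = np.random                                             # assumed; rng was undefined
y_data = rng.randint(size=N, low=0, high=2).reshape(N, 1)   # (400, 1), not (400,)

# ...or reshape at call time:
train(x_train, y_train.reshape(-1, 1))
```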

Spyder: importing TensorFlow or keras raises no error, but the program terminates.

In Spyder, `import tensorflow` or `import keras` raises no error, but the program just terminates: nothing after the import produces any output. How can I fix this? Help appreciated!

Keras error when adding a fully connected layer while training a DNN

![screenshot](https://img-ask.csdn.net/upload/201804/09/1523244974_485144.png) I changed the keras version number, but it still errors.

Keras raises an OMP problem

I'm on Ubuntu 16.04 with tensorflow 1.13.1 and keras 2.2.4 set up via Anaconda 3. It used to run without errors, but after I was forced to reinstall the environment I now get the OMP threading problem shown in the screenshot. It has been bothering me for a long time and I can't figure it out; any pointers appreciated. Screenshot: ![screenshot](https://img-ask.csdn.net/upload/201904/18/1555586019_885202.png)
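The screenshot isn't legible here, so this is an assumption: on Anaconda setups this is usually "OMP: Error #15: Initializing libiomp5.so, but found libiomp5.so already initialized" — two copies of the Intel OpenMP runtime loaded at once (e.g. MKL plus the one bundled with TensorFlow). `conda install nomkl` is one reported remedy; the quick workaround is to allow the duplicate before importing:

```python
# Workaround sketch, assuming the classic "OMP: Error #15" duplicate-runtime
# message: permit the duplicate libiomp5 (masks the issue rather than fixing it).
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import tensorflow as tf
import keras
```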

Keras error: 'ProgbarLogger' object has no attribute 'log_values'

Running an LSTM with keras (TensorFlow backend) produced the error below.

```
Traceback (most recent call last):
  File "/Users/zhaojing/PycharmProjects/CWS_02/lstm01.py", line 94, in <module>
    model.fit(np.array(list(d['x'])).reshape(-1,maxlen), np.array(list(d['y'])).reshape((-1,maxlen,5)), batch_size=batch_size, epochs=1)
  File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1705, in fit
    validation_steps=validation_steps)
  File "/usr/local/lib/python3.6/site-packages/keras/engine/training.py", line 1256, in _fit_loop
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/usr/local/lib/python3.6/site-packages/keras/callbacks.py", line 77, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/usr/local/lib/python3.6/site-packages/keras/callbacks.py", line 339, in on_epoch_end
    self.progbar.update(self.seen, self.log_values)
AttributeError: 'ProgbarLogger' object has no attribute 'log_values'
```

How can I solve this? keras 2.1.6, TensorFlow 1.8.0rc1.
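One known trigger for this AttributeError, offered as an assumption: `fit()` received zero training samples, so no batch ever ran and the progress bar's `log_values` was never created. Checking the array shapes before fitting is a cheap first step (this sketch reuses `d`, `maxlen`, `batch_size` and `model` from the question):

```python
# Sketch: verify the training arrays are non-empty before calling fit().
X = np.array(list(d['x'])).reshape(-1, maxlen)
Y = np.array(list(d['y'])).reshape((-1, maxlen, 5))
print(X.shape, Y.shape)                  # the first dimension must be > 0
assert len(X) > 0, "empty training set - check how d['x'] is built"
model.fit(X, Y, batch_size=batch_size, epochs=1)
```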

Help with TypeError: slice indices must be integers or None or have an __index__ method

Environment: PyCharm 2019.2.3, Python 3.7, TensorFlow 2.0. The code:

```python
import tensorflow as tf
import numpy as np


class DataLoader():
    def __init__(self):
        path = tf.keras.utils.get_file('nietzsche.txt',
                                       origin='https://s3.amazonaws.com/text-datasets/nietzsche.txt')
        with open(path, encoding='utf-8') as f:
            self.raw_text = f.read().lower()
        self.chars = sorted(list(set(self.raw_text)))
        self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
        self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
        self.text = [self.char_indices[c] for c in self.raw_text]

    def get_batch(self, seq_length, batch_size):
        seq = []
        next_char = []
        for i in range(batch_size):
            index = np.random.randint(0, len(self.text) - seq_length)
            seq.append(self.text[index:index + seq_length])
            next_char.append(self.text[index + seq_length])
        return np.array(seq), np.array(next_char)  # [batch_size, seq_length], [num_batch]


class RNN(tf.keras.Model):
    def __init__(self, num_chars, batch_size, seq_length):
        super().__init__()
        self.num_chars = num_chars
        self.seq_length = seq_length
        self.batch_size = batch_size
        self.cell = tf.keras.layers.LSTMCell(units=256)
        self.dense = tf.keras.layers.Dense(units=self.num_chars)

    def call(self, inputs, from_logits=False):
        inputs = tf.one_hot(inputs, depth=self.num_chars)  # [batch_size, seq_length, num_chars]
        state = self.cell.get_initial_state(batch_size=self.batch_size, dtype=tf.float32)
        for t in range(self.seq_length):
            output, state = self.cell(inputs[:, t, :], state)
        logits = self.dense(output)
        if from_logits:
            return logits
        else:
            return tf.nn.softmax(logits)


num_batches = 10
seq_length = 40
batch_size = 50
learning_rate = 1e-3

data_loader = DataLoader()
model = RNN(num_chars=len(data_loader.chars), batch_size=batch_size, seq_length=seq_length)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
for batch_index in range(num_batches):
    X, y = data_loader.get_batch(seq_length, batch_size)
    with tf.GradientTape() as tape:
        y_pred = model(X)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y_true=y, y_pred=y_pred)
        loss = tf.reduce_mean(loss)
        print("batch %d: loss %f" % (batch_index, loss.numpy()))
    grads = tape.gradient(loss, model.variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))


def predict(self, inputs, temperature=1.):
    batch_size, _ = tf.shape(inputs)
    logits = self(inputs, from_logits=True)
    prob = tf.nn.softmax(logits / temperature).numpy()
    return np.array([np.random.choice(self.num_chars, p=prob[i, :])
                     for i in range(batch_size.numpy())])


X_, _ = data_loader.get_batch(seq_length, 1)
for diversity in [0.2, 0.5, 1.0, 1.2]:
    X = X_
    print("diversity %f:" % diversity)
    for t in range(400):
        y_pred = model.predict(X, diversity)
        print(data_loader.indices_char[y_pred[0]], end='', flush=True)
        X = np.concatenate([X[:, 1:], np.expand_dims(y_pred, axis=1)], axis=-1)
    print("\n")
```

The error:

```
Python 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] on win32
runfile('F:/pyth/pj3/study3.py', wdir='F:/pyth/pj3')
batch 0: loss 4.041161
batch 1: loss 4.026710
batch 2: loss 4.005230
batch 3: loss 3.983728
batch 4: loss 3.920999
batch 5: loss 3.864793
batch 6: loss 3.644211
batch 7: loss 3.375458
batch 8: loss 3.620051
batch 9: loss 3.382381
diversity 0.200000:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "D:\Program Files\JetBrains\PyCharm 2019.2.3\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "D:\Program Files\JetBrains\PyCharm 2019.2.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "F:/pyth/pj3/study3.py", line 97, in <module>
    y_pred = model.predict(X, diversity)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 909, in predict
    use_multiprocessing=use_multiprocessing)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 722, in predict
    callbacks=callbacks)
  File "D:\ProgramData\Anaconda3\envs\kingtf2\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 362, in model_iteration
    batch_ids = index_array[batch_start:batch_end]
TypeError: slice indices must be integers or None or have an __index__ method
```

The parts that are probably at fault:

```python
for diversity in [0.2, 0.5, 1.0, 1.2]:
    X = X_
    print("diversity %f:" % diversity)
    for t in range(400):
        y_pred = model.predict(X, diversity)
        print(data_loader.indices_char[y_pred[0]], end='', flush=True)
        X = np.concatenate([X[:, 1:], np.expand_dims(y_pred, axis=1)], axis=-1)
    print("\n")
```

```python
def predict(self, inputs, temperature=1.):
    batch_size, _ = tf.shape(inputs)
    logits = self(inputs, from_logits=True)
    prob = tf.nn.softmax(logits / temperature).numpy()
    return np.array([np.random.choice(self.num_chars, p=prob[i, :])
                     for i in range(batch_size.numpy())])
```
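The traceback explains it: `model.predict(X, diversity)` lands in Keras's built-in `Model.predict`, whose second positional parameter is `batch_size`, so `0.2` ends up as a float slice index. That happens because the custom `predict` above is defined at module level instead of inside the `RNN` class (its indentation was evidently lost). Moving it into the class restores the intended call:

```python
# Sketch: define predict as a method of RNN so model.predict(X, diversity)
# calls this, not Keras's built-in predict(x, batch_size=...).
class RNN(tf.keras.Model):
    # ... __init__ and call as above ...
    def predict(self, inputs, temperature=1.):
        batch_size, _ = tf.shape(inputs)
        logits = self(inputs, from_logits=True)
        prob = tf.nn.softmax(logits / temperature).numpy()
        return np.array([np.random.choice(self.num_chars, p=prob[i, :])
                         for i in range(batch_size.numpy())])
```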

How do I write the Keras input shape?

Hi all! I'm trying to do deep learning with Keras's LSTM. My data: X_train has 30000 samples with 6 values each, so X_train is (30000, 6). According to the Keras docs, the input shape should be (samples, timesteps, input_dim), so I thought input_shape=(30000, 1, 6), but that errors with: Input 0 is incompatible with layer lstm_6: expected ndim=3, found ndim=4. I figured the input shape was wrong and changed it to (1, 6), and the error became: ValueError: Error when checking input: expected lstm_7_input to have 3 dimensions, but got array with shape (30000, 6). With (30000, 6) the message is the same. How should I set input_shape? Thanks!
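The detail that resolves this: `input_shape` describes one sample and excludes the batch axis, while the data itself must still be 3-D. So reshape X_train to (30000, 1, 6) and declare `input_shape=(1, 6)`. A sketch with random stand-in data:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X_train = np.random.rand(30000, 6)       # stand-in for the real data
X_train = X_train.reshape(-1, 1, 6)      # (samples, timesteps, input_dim)

model = Sequential()
model.add(LSTM(32, input_shape=(1, 6)))  # per-sample shape: 1 timestep, 6 features
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
```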

Input for a python keras Sequential model

With Convolution1D as the first layer of a python keras Sequential model, what form should the input data take? ![screenshot](https://img-ask.csdn.net/upload/201611/13/1479043537_386017.png) ![screenshot](https://img-ask.csdn.net/upload/201611/13/1479043555_758273.png) I'm just getting started and would appreciate some pointers.
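The screenshots aren't visible here, but the general rule may help: `Convolution1D` consumes 3-D input shaped (samples, steps, channels), and `input_shape` gives the per-sample part only. A hypothetical sketch:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Convolution1D

X = np.random.rand(100, 50, 1)   # 100 samples, 50 steps, 1 channel (made-up sizes)
model = Sequential()
model.add(Convolution1D(16, 3, input_shape=(50, 1)))  # per-sample shape only
model.compile(loss='mse', optimizer='adam')
print(model.predict(X).shape)    # (100, 48, 16)
```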

keras model.fit errors: the input's shape has the wrong dimensionality — how to fix?

Calling

```python
model.fit(x=images, y=labels, validation_split=0.1, batch_size=batch_size,
          epochs=n_epochs, callbacks=callbacks, shuffle=True)
```

fails because my training images are grayscale, so images has shape (2, 28, 28), and I get: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (2, 28, 28). How should I handle this?
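A conv net's input layer expects 4-D batches, (samples, height, width, channels); grayscale data just needs an explicit channel axis. A sketch using the question's `images`:

```python
import numpy as np

# (2, 28, 28) -> (2, 28, 28, 1); the model's input_shape should be (28, 28, 1).
images = np.expand_dims(images, axis=-1)
```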

Running a keras model: why does capping CPU usage raise an error?

I'm running a model from keras and, to stop it hogging the machine, I tried to limit the number of CPU cores:

```python
import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence unsupported-CPU-instruction warnings
config = tf.ConfigProto()
config.device_count = {'CPU': 4}  # commenting out this line removes the error
config.intra_op_parallelism_threads = 4
config.inter_op_parallelism_threads = 4
config.allow_soft_placement = True
config.log_device_placement = True  # device placement logging
sess = tf.Session(config=config)
KTF.set_session(sess)
```

It fails with: Assignment not allowed to repeated field "device_count" in protocol message object. Very strange... how do I solve this?
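The cause: `ConfigProto` is a protocol-buffer message, and `device_count` is a map field, which protobuf forbids assigning directly — hence the error on that line. Passing it through the constructor works:

```python
import tensorflow as tf

# Sketch: map/repeated protobuf fields go through the constructor.
config = tf.ConfigProto(
    device_count={'CPU': 4},
    intra_op_parallelism_threads=4,
    inter_op_parallelism_threads=4,
    allow_soft_placement=True,
    log_device_placement=False)
sess = tf.Session(config=config)
```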

ValueError implementing a SegNet in keras — help appreciated

![screenshot](https://img-ask.csdn.net/upload/201904/05/1554454470_801036.jpg) The error: Error when checking target: expected activation_1 to have 3 dimensions, but got array with shape (32, 10). keras with the tensorflow backend. The code:

```python
# coding=utf-8
import matplotlib
from PIL import Image
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import argparse
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization, Reshape, Permute, Activation, Flatten
# from keras.utils.np_utils import to_categorical
# from keras.preprocessing.image import img_to_array
from keras.models import Model
from keras.layers import Input
from keras.callbacks import ModelCheckpoint
# from sklearn.preprocessing import LabelBinarizer
# from sklearn.model_selection import train_test_split
# import pickle
import matplotlib.pyplot as plt
import os
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)

path = '/tmp/2'
os.chdir(path)
training_set = train_datagen.flow_from_directory(
    'trainset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)
test_set = test_datagen.flow_from_directory(
    'testset',
    target_size=(64, 64),
    batch_size=32,
    class_mode='categorical',
    shuffle=True)


def SegNet():
    model = Sequential()
    # encoder
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2, 2)))
    # (8,8)
    # decoder
    model.add(UpSampling2D(size=(2, 2)))
    # (16,16)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (32,32)
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(512, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (64,64)
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(256, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (128,128)
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(UpSampling2D(size=(2, 2)))
    # (256,256)
    model.add(Conv2D(64, (3, 3), strides=(1, 1), input_shape=(64, 64, 3), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(64, (3, 3), strides=(1, 1), padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(Conv2D(10, (1, 1), strides=(1, 1), padding='valid', activation='relu'))
    model.add(BatchNormalization())
    model.add(Reshape((64*64, 10)))
    # swap axis 1 and axis 2, equivalent to np.swapaxes(layer, 1, 2)
    model.add(Permute((2, 1)))
    # model.add(Flatten())
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    model.summary()
    return model


def main():
    model = SegNet()
    filepath = "/tmp/2/weights.best.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
    callbacks_list = [checkpoint]
    history = model.fit_generator(
        training_set,
        steps_per_epoch=(training_set.samples / 32),
        epochs=20,
        callbacks=callbacks_list,
        validation_data=test_set,
        validation_steps=(test_set.samples / 32))
    # Plotting the Loss and Classification Accuracy
    model.metrics_names
    print(history.history.keys())
    # "Accuracy"
    plt.plot(history.history['acc'])
    plt.plot(history.history['val_acc'])
    plt.title('Model Accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()
    # "Loss"
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['train', 'test'], loc='upper left')
    plt.show()


if __name__ == '__main__':
    main()
```

The crux: SegNet has no fully connected layers, so the final output should be a label map the same size as the input image, right? How do I change the code accordingly? The input images are 64x64 with 3 channels, 10 classes in total, stored in the testset and trainset folders.
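Two things stand out (a hedged reading, since only the target error is shown): first, `flow_from_directory(class_mode='categorical')` produces one class vector per image, shape (32, 10) — classification labels, not segmentation masks; second, the `Permute((2, 1))` leaves the output as (10, 4096), so the final softmax runs across pixels instead of classes. A segmentation setup usually drops the Permute, keeps the output at (64*64, 10), and feeds per-pixel one-hot masks from a custom generator:

```python
# Sketch: targets for a (64*64, 10) softmax output must be per-pixel one-hot
# masks, not per-image class vectors.
import numpy as np

batch_size = 32
masks = np.zeros((batch_size, 64 * 64, 10), dtype='float32')  # hypothetical masks
# masks[i, p, c] = 1.0 where pixel p of image i belongs to class c, derived
# from your label images; then feed (images, masks) batches to model.fit,
# instead of the class_mode='categorical' generator output.
```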
