Running image = image.astype(np.float32) in Python raises an error.

AttributeError: 'NoneType' object has no attribute 'astype'. What is the cause, and how can it be fixed?

5 answers

It means exactly what it says: your image object has no astype attribute.

How good is your English?... Translation: AttributeError: a 'NoneType' object does not have this attribute.

So the problem apparently shows up at the astype call. I'm not a Python programmer, but for reference:

astype(type): returns a copy of the array converted to the specified type.


The function that assigns a value to image returned nothing.

It returned nothing, so check that function's source, or test whether image is None before calling astype on it.

qq_42584188
@lyp1997 How should I fix this? The image goes through function calls spread across several files, and I can't find where the image file path gets passed in.
12 months ago · Reply

Post the code before and after this line as well. The error clearly means image is None, i.e. the data was never read in correctly.
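A minimal sketch of the usual check, assuming the image is read with cv2.imread (the original post does not show the loading code): cv2.imread silently returns None when the path is wrong or the file cannot be decoded, and that None is what later fails at astype.

```
import cv2
import numpy as np

path = "example.jpg"              # hypothetical path; use the one your code builds
image = cv2.imread(path)          # returns None instead of raising if the read fails

if image is None:
    raise FileNotFoundError("cv2.imread could not read %r; check the path and extension" % path)

image = image.astype(np.float32)  # safe now: image is a real ndarray
```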

Other related recommendations
'float' object cannot be interpreted as an integer

在运行代码的时候调用了zeros函数,按理说里面的dtype参数默认也应该是float啊,可是不知道为什么会报错,代码如下: > def _load_representation(self): ''' load user and item latent features generate by MF for every meta-graph ''' #if dt in ['yelp-200k', 'amazon-200k', 'amazon-50k', 'amazon-100k', 'amazon-10k', 'amazon-5k', 'cikm-yelp', 'yelp-50k', 'yelp-10k', 'yelp-5k', 'yelp-100k', 'douban']: fnum = self.N / 2 ufilename = self.data_dir + 'uids.txt' bfilename = self.data_dir + 'bids.txt' uids = [int(l.strip()) for l in open(ufilename, 'r').readlines()] uid2reps = {k:np.zeros(fnum, dtype=np.float64) for k in uids} bids = [int(l.strip()) for l in open(bfilename, 'r').readlines()] bid2reps = {k:np.zeros(fnum, dtype=np.float64) for k in bids} ufiles, vfiles = self._generate_feature_files() feature_dir = self.data_dir + 'mf_features/path_count/' for find, filename in enumerate(ufiles): ufs = np.loadtxt(feature_dir + filename, dtype=np.float64) cur = find * self.F for uf in ufs: uid = int(uf[0]) f = uf[1:] uid2reps[uid][cur:cur+self.F] = f for find, filename in enumerate(vfiles): bfs = np.loadtxt(feature_dir + filename, dtype=np.float64) cur = find * self.F for bf in bfs: bid = int(bf[0]) f = bf[1:] bid2reps[bid][cur:cur+self.F] = f logging.info('load all representations, len(ufiles)=%s, len(vfiles)=%s, ufiles=%s, vfiles=%s', len(ufiles), len(vfiles), '|'.join(ufiles), '|'.join(vfiles)) return uid2reps, bid2reps 有问题的应该是里面这两句: > uids = [int(l.strip()) for l in open(ufilename, 'r').readlines()] uid2reps = {k:np.zeros(fnum, dtype=np.float64) for k in uids} 最前面有导入numpy“import numpy as np”,求大神帮忙看看到底是哪里错了。。一直会提示“TypeError: 'float' object cannot be interpreted as an integer”。。
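The likely cause is that fnum = self.N / 2 is a float under Python 3's true division, and np.zeros refuses a float length. A minimal sketch of the fix, with plain names standing in for the class attributes:

```
import numpy as np

N = 10                 # stands in for self.N
fnum = N / 2           # Python 3 true division -> 5.0, a float
# np.zeros(fnum)       # TypeError: 'float' object cannot be interpreted as an integer

fnum = N // 2          # integer division -> 5
reps = np.zeros(fnum, dtype=np.float64)   # works; int(N / 2) would do as well
print(reps.shape)
```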

data,label=read_img(path) reports an error

```
def read_img(path):
    cate = [path + '/' + x for x in os.listdir(path) if os.path.isdir(path + '/' + x)]
    imgs = []
    labels = []
    for idx, folder in enumerate(cate):
        for im in glob.glob(folder + '/*.png'):
            print('reading the images:%s' % (im))
            img = io.imread(im)
            img = transform.resize(img, (w, h, c))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs, np.float32), np.asarray(labels, np.int32)

data, label = read_img(path)
```

```
Traceback (most recent call last):
  File "D:\anaconda3\anaconda file\CnnFaces\train.py", line 64, in <module>
    data,label=read_img(path)
  File "D:\anaconda3\anaconda file\CnnFaces\train.py", line 60, in read_img
    return np.asarray(imgs,np.float32),np.asarray(labels,np.int32)
```

The traceback flags these two lines. Why do they fail?

Running mnist_show.py in PyCharm produces the following problem:

# 这是深度学习入门这本书里的一段代码,请问这个问题是什么意思以及怎样解决? 报错如下:(下面有源代码)Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] on win32 runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') Converting train-images-idx3-ubyte.gz to NumPy Array ... Traceback (most recent call last): File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-eab209ee1d7f>", line 1, in <module> runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py", line 13, in <module> (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 106, in load_mnist init_mnist() File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 76, in init_mnist dataset = _convert_numpy() 源代码为:# coding: utf-8 mnist_show.py:::: import sys, os sys.path.append(os.pardir) # 为了导入父目录的文件而进行的设定 import numpy as np from dataset.mnist import load_mnist from PIL import Image def img_show(img): pil_img = Image.fromarray(np.uint8(img)) pil_img.show() (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) img = x_train[0] label = t_train[0] print(label) # 5 print(img.shape) # (784,) img = img.reshape(28, 28) # 把图像的形状变为原来的尺寸 print(img.shape) # (28, 28) img_show(img) mnist.py::: # coding: utf-8 try: import urllib.request except ImportError: raise ImportError('You should use Python 3.x') import os.path import gzip import pickle import os import numpy as np url_base = 'http://yann.lecun.com/exdb/mnist/' key_file = { 'train_img':'train-images-idx3-ubyte.gz', 'train_label':'train-labels-idx1-ubyte.gz', 'test_img':'t10k-images-idx3-ubyte.gz', 'test_label':'t10k-labels-idx1-ubyte.gz' } dataset_dir = os.path.dirname(os.path.abspath(__file__)) save_file = dataset_dir + "/mnist.pkl" train_num = 60000 test_num = 10000 img_dim = (1, 28, 28) img_size = 784 def _download(file_name): file_path = dataset_dir + "/" + file_name if os.path.exists(file_path): return print("Downloading " + file_name + " ... 
") urllib.request.urlretrieve(url_base + file_name, file_path) print("Done") def download_mnist(): for v in key_file.values(): _download(v) def _load_label(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: labels = np.frombuffer(f.read(), np.uint8, offset=8) print("Done") return labels def _load_img(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: data = np.frombuffer(f.read(), np.uint8, offset=16) data = data.reshape(-1, img_size) print("Done") return data def _convert_numpy(): dataset = {} dataset['train_img'] = _load_img(key_file['train_img']) dataset['train_label'] = _load_label(key_file['train_label']) dataset['test_img'] = _load_img(key_file['test_img']) dataset['test_label'] = _load_label(key_file['test_label']) return dataset def init_mnist(): download_mnist() dataset = _convert_numpy() print("Creating pickle file ...") with open(save_file, 'wb') as f: pickle.dump(dataset, f, -1) print("Done!") def _change_one_hot_label(X): T = np.zeros((X.size, 10)) for idx, row in enumerate(T): row[X[idx]] = 1 return T def load_mnist(normalize=True, flatten=True, one_hot_label=False): """读入MNIST数据集 Parameters ---------- normalize : 将图像的像素值正规化为0.0~1.0 one_hot_label : one_hot_label为True的情况下,标签作为one-hot数组返回 one-hot数组是指[0,0,1,0,0,0,0,0,0,0]这样的数组 flatten : 是否将图像展开为一维数组 Returns ------- (训练图像, 训练标签), (测试图像, 测试标签) """ if not os.path.exists(save_file): init_mnist() with open(save_file, 'rb') as f: dataset = pickle.load(f) if normalize: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].astype(np.float32) dataset[key] /= 255.0 if one_hot_label: dataset['train_label'] = _change_one_hot_label(dataset['train_label']) dataset['test_label'] = _change_one_hot_label(dataset['test_label']) if not flatten: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].reshape(-1, 1, 28, 28) return (dataset['train_img'], dataset['train_label']), (dataset['test_img'], dataset['test_label']) if __name__ == '__main__': init_mnist()

When training Mask R-CNN with TensorFlow, the training command always fails with an error and I cannot get any further. Please help.

用TensorFlow 训练mask rcnn时,总是在执行训练语句时报错,进行不下去了,求大神 执行语句是: ``` python model_main.py --model_dir=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training --pipeline_config_path=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training/mask_rcnn_inception_v2_coco.config ``` 报错信息如下: ``` WARNING:tensorflow:Forced number of epochs for all eval validations to be 1. WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1. WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x000001C1EA335C80>) includes params argument, but params are not passed to Estimator. WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards. Traceback (most recent call last): File "model_main.py", line 109, in <module> tf.app.run() File "E:\Python3.6\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run _sys.exit(main(argv)) File "model_main.py", line 105, in main tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0]) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate executor.run() File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run self.run_local() File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local hooks=train_hooks) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train loss = self._train_model(input_fn, hooks, saving_listeners) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model return self._train_model_default(input_fn, hooks, saving_listeners) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default input_fn, model_fn_lib.ModeKeys.TRAIN)) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn result = self._call_input_fn(input_fn, mode) File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn return input_fn(**kwargs) File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 525, in _train_input_fn batch_size=params['batch_size'] if params else train_config.batch_size) File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 149, in build dataset = data_map_fn(process_fn, num_parallel_calls=num_parallel_calls) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 853, in map return ParallelMapDataset(self, map_func, num_parallel_calls) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1870, in __init__ super(ParallelMapDataset, self).__init__(input_dataset, map_func) File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1839, in __init__ self._map_func.add_to_graph(ops.get_default_graph()) File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 484, in add_to_graph self._create_definition_if_needed() File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 319, in _create_definition_if_needed self._create_definition_if_needed_impl() File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 336, in _create_definition_if_needed_impl outputs = self._func(*inputs) File 
"E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1804, in tf_map_func ret = map_func(nested_args) File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 130, in process_fn processed_tensors = transform_input_data_fn(processed_tensors) File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 515, in transform_and_pad_input_data_fn tensor_dict=transform_data_fn(tensor_dict), File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 129, in transform_input_data tf.expand_dims(tf.to_float(image), axis=0)) File "D:\Tensorflow\tf\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 543, in preprocess parallel_iterations=self._parallel_iterations) File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in static_or_dynamic_map_fn outputs = [fn(arg) for arg in tf.unstack(elems)] File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in <listcomp> outputs = [fn(arg) for arg in tf.unstack(elems)] File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2264, in resize_to_range lambda: _resize_portrait_image(image)) File "E:\Python3.6\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func return func(*args, **kwargs) File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2063, in cond orig_res_t, res_t = context_t.BuildCondBranch(true_fn) File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1913, in BuildCondBranch original_result = fn() File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2263, in <lambda> lambda: _resize_landscape_image(image), File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2245, in _resize_landscape_image align_corners=align_corners, preserve_aspect_ratio=True) TypeError: resize_images() got an unexpected keyword argument 'preserve_aspect_ratio' ``` 根据提示的最后一句,是说没有一个有效参数 我用的是TensorFlow1.8 python3.6,下载的最新的TensorFlow-models-master

Why do all of my Keras validation results come out as 1.0?

keras做图像2分类,结果如下: [[173 0] [ 0 21]] keras的AUC为: 1.0 AUC: 1.0000 ACC: 1.0000 Recall: 1.0000 F1-score: 1.0000 Precesion: 1.0000 代码如下: data = np.load('1.npz') image_data, label_data= data['image'], data['label'] skf = StratifiedKFold(n_splits=3, shuffle=True) for train, test in skf.split(image_data, label_data): train_x=image_data[train] test_x=image_data[test] train_y=label_data[train] test_y=label_data[test] train_x = np.array(train_x) test_x = np.array(test_x) train_x = train_x.reshape(train_x.shape[0],1,28,28) test_x = test_x.reshape(test_x.shape[0],1,28,28) train_x = train_x.astype('float32') test_x = test_x.astype('float32') train_x /=255 test_x /=255 train_y = np.array(train_y) test_y = np.array(test_y) model.compile(optimizer='rmsprop',loss="binary_crossentropy",metrics=["accuracy"]) model.fit(train_x, train_y,batch_size=64,verbose=1) 根据结果判断,肯定是代码哪错的很离谱,请教到底错在哪?

Converting []uint8 to float64

<div class="post-text" itemprop="text"> <p>What is the best way to handle an http <code>resp.Body</code> which is formatted as <code>[]uint8</code> and not as JSON? I would like to convert the bytes into a <code>float64</code>.</p> <p>This is the returned value response:</p> <pre><code>value : %!F([]uint8=[48 46 48 48 49 50 53 53 50 49]) </code></pre> </div>

AttributeError: module 'scipy' has no attribute 'io'

Running the following code

```
import scipy.io
scipy.io.loadmat(image[0][i])['section'], dtype=np.float32)
```

raises:

```
AttributeError: module 'scipy' has no attribute 'io'
```

Changing the code to

```
import scipy.io as sio
sio.loadmat(image[0][i])['section'], dtype=np.float32)
```

raises a different error:

```
TypeError: super(type, obj): obj must be an instance or subtype of type
```

Why is that? What is the correct fix? (scipy has already been downgraded to 1.2.1; Python is 3.7.)
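Independent of the AttributeError (which suggests something other than the real scipy.io is being imported, for example a local file shadowing the package), the snippet itself is malformed: the dtype keyword is dangling outside any call, and loadmat takes no dtype argument. A minimal sketch of the intended call, with a hypothetical path standing in for image[0][i]:

```
import numpy as np
import scipy.io as sio

mat_path = "example.mat"                                  # hypothetical; image[0][i] in the question
mat = sio.loadmat(mat_path)                               # load the whole .mat file as a dict
section = np.asarray(mat['section'], dtype=np.float32)   # cast after loading, not inside loadmat
print(section.shape, section.dtype)
```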

Keras: running a minimal demo throws an error

按照书上的代码如下: ``` import numpy as np import pandas as pd from keras.utils import np_utils np.random.seed(10) from keras.models import Sequential from keras.layers import Dense from keras.datasets import mnist #数据准备------------------- (x_train_image, y_train_label), \ (x_test_image, y_test_label) = mnist.load_data() x_Train =x_train_image.reshape(60000, 784).astype('float32') x_Test = x_test_image.reshape(10000, 784).astype('float32') #标准化 x_Train_normalize = x_Train/ 255 x_Test_normalize = x_Test/ 255 y_TrainOne_Hot = np_utils.to_categorical(y_train_label) y_TestOne_Hot = np_utils.to_categorical(y_test_label) print(x_Train_normalize) print('sss') print(y_TrainOne_Hot) #建立模型------------------- model = Sequential() model.add(Dense(units=256, input_dim=784, kernel_initializer='normal', activation='relu')) model.add(Dense(units=10, kernel_initializer='normal', activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) train_history =model.fit(x=x_Train_normalize, y=y_TrainOne_Hot,validation_split=0.2, epochs=10, batch_size=200,verbose=2) ``` 报错:softmax() got an unexpected keyword argument 'axis' 详细的报错: Traceback (most recent call last): File "C:/Users/51530/PycharmProjects/ML/keras/K-MNIST/train.py", line 36, in <module> activation='softmax')) File "E:\anaconda\lib\site-packages\keras\models.py", line 522, in add output_tensor = layer(self.outputs[0]) File "E:\anaconda\lib\site-packages\keras\engine\topology.py", line 619, in __call__ output = self.call(inputs, **kwargs) File "E:\anaconda\lib\site-packages\keras\layers\core.py", line 881, in call output = self.activation(output) File "E:\anaconda\lib\site-packages\keras\activations.py", line 29, in softmax return K.softmax(x) File "E:\anaconda\lib\site-packages\keras\backend\tensorflow_backend.py", line 2963, in softmax return tf.nn.softmax(x, axis=axis) TypeError: softmax() got an unexpected keyword argument 'axis' 有知道为什么会报错吗?先谢了
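This TypeError is the classic symptom of a Keras/TensorFlow version mismatch: Keras 2.1.x calls tf.nn.softmax(x, axis=...), while older TensorFlow builds only knew the keyword dim. A quick version check (a sketch) usually confirms it; upgrading TensorFlow, or installing a Keras release that matches the installed TensorFlow, is the usual fix.

```
import tensorflow as tf
import keras

print('tensorflow', tf.__version__)
print('keras', keras.__version__)
```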

Handwritten digit recognition with OpenCV-Python's built-in ANN reports an error. Please help

![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479207_695592.png)![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479217_497206.png) 我用python3.6按照《OpenCV3计算机视觉》书上代码进行手写字识别,识别率很低,运行时还报了错:OpenCV(3.4.1) Error: Assertion failed ((type == 5 || type == 6) && inputs.cols == layer_sizes[0]) in cv::ml::ANN_MLPImpl::predict, file C:\projects\opencv-python\opencv\modules\ml\src\ann_mlp.cpp, line 411 ``` 具体代码如下:求大佬指点下 import cv2 import numpy as np import digits_ann as ANN def inside(r1, r2): x1, y1, w1, h1 = r1 x2, y2, w2, h2 = r2 if (x1 > x2) and (y1 > y2) and (x1 + w1 < x2 + w2) and (y1 + h1 < y2 + h2): return True else: return False def wrap_digit(rect): x, y, w, h = rect padding = 5 hcenter = x + w / 2 vcenter = y + h / 2 if (h > w): w = h x = hcenter - (w / 2) else: h = w y = vcenter - (h / 2) return (int(x - padding), int(y - padding), int(w + padding), int(h + padding)) ''' 注意:首次测试时,建议将使用完整的训练数据集,且进行多次迭代,直到收敛 如:ann, test_data = ANN.train(ANN.create_ANN(100), 50000, 30) ''' ann, test_data = ANN.train(ANN.create_ANN(10), 50000, 1) # 调用所需识别的图片,并处理 path = "C:\\Users\\64601\\PycharmProjects\Ann\\images\\numbers.jpg" img = cv2.imread(path, cv2.IMREAD_UNCHANGED) bw = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) bw = cv2.GaussianBlur(bw, (7, 7), 0) ret, thbw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY_INV) thbw = cv2.erode(thbw, np.ones((2, 2), np.uint8), iterations=2) image, cntrs, hier = cv2.findContours(thbw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) rectangles = [] for c in cntrs: r = x, y, w, h = cv2.boundingRect(c) a = cv2.contourArea(c) b = (img.shape[0] - 3) * (img.shape[1] - 3) is_inside = False for q in rectangles: if inside(r, q): is_inside = True break if not is_inside: if not a == b: rectangles.append(r) for r in rectangles: x, y, w, h = wrap_digit(r) cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2) roi = thbw[y:y + h, x:x + w] try: digit_class = ANN.predict(ann, roi)[0] except: print("except") continue cv2.putText(img, "%d" % digit_class, (x, y - 1), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0)) cv2.imshow("thbw", thbw) cv2.imshow("contours", img) cv2.waitKey() cv2.destroyAllWindows() ####### import cv2 import pickle import numpy as np import gzip """OpenCV ANN Handwritten digit recognition example Wraps OpenCV's own ANN by automating the loading of data and supplying default paramters, such as 20 hidden layers, 10000 samples and 1 training epoch. 
The load data code is taken from http://neuralnetworksanddeeplearning.com/chap1.html by Michael Nielsen """ def vectorized_result(j): e = np.zeros((10, 1)) e[j] = 1.0 return e def load_data(): with gzip.open('C:\\Users\\64601\\PycharmProjects\\Ann\\mnist.pkl.gz') as fp: # 注意版本不同,需要添加传入第二个参数encoding='bytes',否则出现编码错误 training_data, valid_data, test_data = pickle.load(fp, encoding='bytes') fp.close() return (training_data, valid_data, test_data) def wrap_data(): # tr_d数组长度为50000,va_d数组长度为10000,te_d数组长度为10000 tr_d, va_d, te_d = load_data() # 训练数据集 training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]] training_results = [vectorized_result(y) for y in tr_d[1]] training_data = list(zip(training_inputs, training_results)) # 校验数据集 validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]] validation_data = list(zip(validation_inputs, va_d[1])) # 测试数据集 test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]] test_data = list(zip(test_inputs, te_d[1])) return (training_data, validation_data, test_data) def create_ANN(hidden=20): ann = cv2.ml.ANN_MLP_create() # 建立模型 ann.setTrainMethod(cv2.ml.ANN_MLP_RPROP | cv2.ml.ANN_MLP_UPDATE_WEIGHTS) # 设置训练方式为反向传播 ann.setActivationFunction( cv2.ml.ANN_MLP_SIGMOID_SYM) # 设置激活函数为SIGMOID,还有cv2.ml.ANN_MLP_IDENTITY,cv2.ml.ANNMLP_GAUSSIAN ann.setLayerSizes(np.array([784, hidden, 10])) # 设置层数,输入784层,输出层10 ann.setTermCriteria((cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 0.1)) # 设置终止条件 return ann def train(ann, samples=10000, epochs=1): # tr:训练数据集; val:校验数据集; test:测试数据集; tr, val, test = wrap_data() for x in range(epochs): counter = 0 for img in tr: if (counter > samples): break if (counter % 1000 == 0): print("Epoch %d: Trained %d/%d" % (x, counter, samples)) counter += 1 data, digit = img ann.train(np.array([data.ravel()], dtype=np.float32), cv2.ml.ROW_SAMPLE, np.array([digit.ravel()], dtype=np.float32)) print("Epoch %d complete" % x) return ann, test def predict(ann, sample): resized = sample.copy() rows, cols = resized.shape if rows != 28 and cols != 28 and rows * cols > 0: resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC) return ann.predict(np.array([resized.ravel()], dtype=np.float32)) ```
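The assertion in ANN_MLPImpl::predict requires a float32/float64 row vector whose length equals the first layer size (784 = 28*28). The predict() above only resizes when both dimensions differ from 28, so an ROI such as 28x40 slips through with the wrong length. A minimal sketch that prepares every ROI unconditionally:

```
import cv2
import numpy as np

def prepare_sample(roi):
    """Resize any ROI to 28x28 and flatten it into the 1x784 float32 row the ANN expects."""
    resized = cv2.resize(roi, (28, 28), interpolation=cv2.INTER_CUBIC)
    return resized.astype(np.float32).reshape(1, 28 * 28)

# usage inside the recognition loop (ann comes from ANN.train as in the question):
# prediction = ann.predict(prepare_sample(roi))
```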

New to OpenCV: argparse reports that required arguments are missing

我最近刚接触openCV的内容,在win10的pycharm里面试着去运行相关的程序,但是遇到了报错,可能问题很小白,希望各位大牛不吝赐教。其内容是:deep-learning-object-detection.py: error: the following arguments are required: 全篇代码如下 ``` # USAGE # python deep_learning_object_detection.py --image images/example_01.jpg \ # --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel # import the necessary packages import numpy as np import argparse import cv2 ap = argparse.ArgumentParser() ap.add_argument("-i", r"--C:\Users\52314\Desktop\deep\images\example_01.jpg", required=True, help="path to input image") ap.add_argument("-p", r"--C:\Users\52314\Desktop\deepMobileNetSSD_deploy.prototxt.txt", required=True, help="path to Caffe 'deploy' prototxt file") ap.add_argument("-m", r"--C:\Users\52314\Desktop\deep\deep_learning_object_detection.py", required=True, help="path to Caffe pre-trained model") ap.add_argument("-c", "--confidence", type=float, default=0.2, help="minimum probability to filter weak detections") args = vars(ap.parse_args()) # initialize the list of class labels MobileNet SSD was trained to # detect, then generate a set of bounding box colors for each class CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"] COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3)) # load our serialized model from disk print("[INFO] loading model...") net = cv2.dnn.readNetFromCaffe(Args[prototxt], Args[model]) # load the input image and construct an input blob for the image # by resizing to a fixed 300x300 pixels and then normalizing it # (note: normalization is done via the authors of the MobileNet SSD # implementation) image = cv2.imread(Args["image"]) (h, w) = image.shape[:2] blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5) # pass the blob through the network and obtain the detections and # predictions print("[INFO] computing object detections...") net.setInput(blob) detections = net.forward() # loop over the detections for i in np.arange(0, detections.shape[2]): # extract the confidence (i.e., probability) associated with the # prediction confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the `confidence` is # greater than the minimum confidence if confidence > Args["confidence"]: # extract the index of the class label from the `detections`, # then compute the (x, y)-coordinates of the bounding box for # the object idx = int(detections[0, 0, i, 1]) box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # display the prediction label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100) print("[INFO] {}".format(label)) cv2.rectangle(image, (startX, startY), (endX, endY), COLORS[idx], 2) y = startY - 15 if startY - 15 > 15 else startY + 15 cv2.putText(image, label, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2) # show the output image cv2.imshow("Output", image) cv2.waitKey(0) ``` 我在网上看到说argparse在win10上面兼容不好所以换了个表达方式,那个也不行,那么到底是什么问题呢?要如何解决这个问题呢?非常感谢!
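The error comes from replacing the option names with file paths: the second argument of add_argument must remain the option string (for example "--image"), and the paths are supplied as values, either on the command line or as defaults. A sketch with the paths from the post filled in as defaults (adjust them to where the files actually live; the caffemodel path here is a guess):

```
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image",
                default=r"C:\Users\52314\Desktop\deep\images\example_01.jpg",
                help="path to input image")
ap.add_argument("-p", "--prototxt",
                default=r"C:\Users\52314\Desktop\deep\MobileNetSSD_deploy.prototxt.txt",
                help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model",
                default=r"C:\Users\52314\Desktop\deep\MobileNetSSD_deploy.caffemodel",
                help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
                help="minimum probability to filter weak detections")
args = vars(ap.parse_args())

# later lookups then read args["image"], args["prototxt"], args["model"], args["confidence"];
# the capitalised Args[...] used further down in the post would raise a NameError of its own.
```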

In PyQt5, how can I read a frame with OpenCV and feed it into the network?

我想通过pyqt5制作一个UI界面封装google object detection api的示例代码,源代码中是识别单张图片,我想通过摄像头输入一帧的图像然后进行识别显示。整个程序如下: ``` # coding:utf-8 ''' V3.0A版本,尝试实现摄像头识别 ''' import numpy as np import cv2 import os import os.path import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile import pylab from distutils.version import StrictVersion from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image from PyQt5 import QtCore, QtGui, QtWidgets from PyQt5.QtWidgets import * from PyQt5.QtCore import * from PyQt5.QtGui import * class UiForm(): openfile_name_pb = '' openfile_name_pbtxt = '' openpic_name = '' num_class = 0 def setupUi(self, Form): Form.setObjectName("Form") Form.resize(600, 690) Form.setMinimumSize(QtCore.QSize(600, 690)) Form.setMaximumSize(QtCore.QSize(600, 690)) self.frame = QtWidgets.QFrame(Form) self.frame.setGeometry(QtCore.QRect(20, 20, 550, 100)) self.frame.setFrameShape(QtWidgets.QFrame.StyledPanel) self.frame.setFrameShadow(QtWidgets.QFrame.Raised) self.frame.setObjectName("frame") self.horizontalLayout_2 = QtWidgets.QHBoxLayout(self.frame) self.horizontalLayout_2.setObjectName("horizontalLayout_2") # 加载模型文件按钮 self.btn_add_file = QtWidgets.QPushButton(self.frame) self.btn_add_file.setObjectName("btn_add_file") self.horizontalLayout_2.addWidget(self.btn_add_file) # 加载pbtxt文件按钮 self.btn_add_pbtxt = QtWidgets.QPushButton(self.frame) self.btn_add_pbtxt.setObjectName("btn_add_pbtxt") self.horizontalLayout_2.addWidget(self.btn_add_pbtxt) # 输入检测类别数目按钮 self.btn_enter = QtWidgets.QPushButton(self.frame) self.btn_enter.setObjectName("btn_enter") self.horizontalLayout_2.addWidget(self.btn_enter) # 打开摄像头 self.btn_opencam = QtWidgets.QPushButton(self.frame) self.btn_opencam.setObjectName("btn_objdec") self.horizontalLayout_2.addWidget(self.btn_opencam) # 开始识别按钮 self.btn_objdec = QtWidgets.QPushButton(self.frame) self.btn_objdec.setObjectName("btn_objdec") self.horizontalLayout_2.addWidget(self.btn_objdec) # 退出按钮 self.btn_exit = QtWidgets.QPushButton(self.frame) self.btn_exit.setObjectName("btn_exit") self.horizontalLayout_2.addWidget(self.btn_exit) # 显示识别后的画面 self.lab_rawimg_show = QtWidgets.QLabel(Form) self.lab_rawimg_show.setGeometry(QtCore.QRect(50, 140, 500, 500)) self.lab_rawimg_show.setMinimumSize(QtCore.QSize(500, 500)) self.lab_rawimg_show.setMaximumSize(QtCore.QSize(500, 500)) self.lab_rawimg_show.setObjectName("lab_rawimg_show") self.lab_rawimg_show.setStyleSheet(("border:2px solid red")) self.retranslateUi(Form) # 这里将按钮和定义的动作相连,通过click信号连接openfile槽? 
self.btn_add_file.clicked.connect(self.openpb) # 用于打开pbtxt文件 self.btn_add_pbtxt.clicked.connect(self.openpbtxt) # 用于用户输入类别数 self.btn_enter.clicked.connect(self.enter_num_cls) # 打开摄像头 self.btn_opencam.clicked.connect(self.opencam) # 开始识别 # ~ self.btn_objdec.clicked.connect(self.object_detection) # 这里是将btn_exit按钮和Form窗口相连,点击按钮发送关闭窗口命令 self.btn_exit.clicked.connect(Form.close) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = QtCore.QCoreApplication.translate Form.setWindowTitle(_translate("Form", "目标检测")) self.btn_add_file.setText(_translate("Form", "加载模型文件")) self.btn_add_pbtxt.setText(_translate("Form", "加载pbtxt文件")) self.btn_enter.setText(_translate("From", "指定识别类别数")) self.btn_opencam.setText(_translate("Form", "打开摄像头")) self.btn_objdec.setText(_translate("From", "开始识别")) self.btn_exit.setText(_translate("Form", "退出")) self.lab_rawimg_show.setText(_translate("Form", "识别效果")) def openpb(self): global openfile_name_pb openfile_name_pb, _ = QFileDialog.getOpenFileName(self.btn_add_file,'选择pb文件','/home/kanghao/','pb_files(*.pb)') print('加载模型文件地址为:' + str(openfile_name_pb)) def openpbtxt(self): global openfile_name_pbtxt openfile_name_pbtxt, _ = QFileDialog.getOpenFileName(self.btn_add_pbtxt,'选择pbtxt文件','/home/kanghao/','pbtxt_files(*.pbtxt)') print('加载标签文件地址为:' + str(openfile_name_pbtxt)) def opencam(self): self.camcapture = cv2.VideoCapture(0) self.timer = QtCore.QTimer() self.timer.start() self.timer.setInterval(100) # 0.1s刷新一次 self.timer.timeout.connect(self.camshow) def camshow(self): global camimg _ , camimg = self.camcapture.read() print(_) camimg = cv2.resize(camimg, (512, 512)) camimg = cv2.cvtColor(camimg, cv2.COLOR_BGR2RGB) print(type(camimg)) #strcamimg = camimg.tostring() showImage = QtGui.QImage(camimg.data, camimg.shape[1], camimg.shape[0], QtGui.QImage.Format_RGB888) self.lab_rawimg_show.setPixmap(QtGui.QPixmap.fromImage(showImage)) def enter_num_cls(self): global num_class num_class, okPressed = QInputDialog.getInt(self.btn_enter,'指定训练类别数','你的目标有多少类?',1,1,28,1) if okPressed: print('识别目标总类为:' + str(num_class)) def img2pixmap(self, image): Y, X = image.shape[:2] self._bgra = np.zeros((Y, X, 4), dtype=np.uint8, order='C') self._bgra[..., 0] = image[..., 2] self._bgra[..., 1] = image[..., 1] self._bgra[..., 2] = image[..., 0] qimage = QtGui.QImage(self._bgra.data, X, Y, QtGui.QImage.Format_RGB32) pixmap = QtGui.QPixmap.fromImage(qimage) return pixmap def object_detection(self): sys.path.append("..") from object_detection.utils import ops as utils_ops if StrictVersion(tf.__version__) < StrictVersion('1.9.0'): raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!') from utils import label_map_util from utils import visualization_utils as vis_util # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_FROZEN_GRAPH = openfile_name_pb # List of the strings that is used to add correct label for each box. 
PATH_TO_LABELS = openfile_name_pbtxt NUM_CLASSES = num_class detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True) def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. TEST_IMAGE_PATHS = camimg print(TEST_IMAGE_PATHS) # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) def run_inference_for_single_image(image, graph): with graph.as_default(): with tf.Session() as sess: # Get handles to input and output tensors ops = tf.get_default_graph().get_operations() all_tensor_names = {output.name for op in ops for output in op.outputs} tensor_dict = {} for key in [ 'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks' ]: tensor_name = key + ':0' if tensor_name in all_tensor_names: tensor_dict[key] = tf.get_default_graph().get_tensor_by_name( tensor_name) if 'detection_masks' in tensor_dict: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size. real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32) detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1]) detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1]) detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks( detection_masks, detection_boxes, image.shape[0], image.shape[1]) detection_masks_reframed = tf.cast( tf.greater(detection_masks_reframed, 0.5), tf.uint8) # Follow the convention by adding back the batch dimension tensor_dict['detection_masks'] = tf.expand_dims( detection_masks_reframed, 0) image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0') # Run inference output_dict = sess.run(tensor_dict, feed_dict={image_tensor: np.expand_dims(image, 0)}) # all outputs are float32 numpy arrays, so convert types as appropriate output_dict['num_detections'] = int(output_dict['num_detections'][0]) output_dict['detection_classes'] = output_dict[ 'detection_classes'][0].astype(np.uint8) output_dict['detection_boxes'] = output_dict['detection_boxes'][0] output_dict['detection_scores'] = output_dict['detection_scores'][0] if 'detection_masks' in output_dict: output_dict['detection_masks'] = output_dict['detection_masks'][0] return output_dict #image = Image.open(TEST_IMAGE_PATHS) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(TEST_IMAGE_PATHS) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) # Actual detection. output_dict = run_inference_for_single_image(image_np, detection_graph) # Visualization of the results of a detection. 
vis_util.visualize_boxes_and_labels_on_image_array( image_np, output_dict['detection_boxes'], output_dict['detection_classes'], output_dict['detection_scores'], category_index, instance_masks=output_dict.get('detection_masks'), use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) #plt.savefig(str(TEST_IMAGE_PATHS)+".jpg") ## 用于显示ui界面的命令 if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) Window = QtWidgets.QWidget() # ui为根据类Ui_From()创建的实例 ui = UiForm() ui.setupUi(Window) Window.show() sys.exit(app.exec_()) ``` 但是运行提示: ![图片说明](https://img-ask.csdn.net/upload/201811/30/1543567054_511116.png) 求助
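One thing that stands out in object_detection(): TEST_IMAGE_PATHS is set to camimg, which is already a NumPy array from OpenCV, yet it is then passed through load_image_into_numpy_array, which expects a PIL Image (it calls .size and .getdata()). A sketch of the likely fix inside that method, keeping the names from the post:

```
# inside object_detection(), replace the PIL-style conversion for camera frames:
import numpy as np

image_np = np.asarray(camimg, dtype=np.uint8)          # camimg is already an RGB HxWx3 array
image_np_expanded = np.expand_dims(image_np, axis=0)   # [1, H, W, 3], the shape the model expects
output_dict = run_inference_for_single_image(image_np, detection_graph)
```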

Found DQN neural-network code online; it runs, but there is no code for loading the saved model

代码可以运行,但是没用读取模型的代码,我在网上找了一段时间,还是没有找到教程。自己写的读写代码不能正常工作 这是原代码 ``` import pygame import random from pygame.locals import * import numpy as np from collections import deque import tensorflow as tf import cv2 BLACK = (0 ,0 ,0 ) WHITE = (255,255,255) SCREEN_SIZE = [320,400] BAR_SIZE = [50, 5] BALL_SIZE = [15, 15] # 神经网络的输出 MOVE_STAY = [1, 0, 0,0] MOVE_LEFT = [0, 1, 0,0] MOVE_RIGHT = [0, 0, 1,0] MOVE_RIGHT1=[0,0,0,1] class Game(object): def __init__(self): pygame.init() self.clock = pygame.time.Clock() self.screen = pygame.display.set_mode(SCREEN_SIZE) pygame.display.set_caption('Simple Game') self.ball_pos_x = SCREEN_SIZE[0]//2 - BALL_SIZE[0]/2 self.ball_pos_y = SCREEN_SIZE[1]//2 - BALL_SIZE[1]/2 self.ball_dir_x = -1 # -1 = left 1 = right self.ball_dir_y = -1 # -1 = up 1 = down self.ball_pos = pygame.Rect(self.ball_pos_x, self.ball_pos_y, BALL_SIZE[0], BALL_SIZE[1]) self.bar_pos_x = SCREEN_SIZE[0]//2-BAR_SIZE[0]//2 self.bar_pos = pygame.Rect(self.bar_pos_x, SCREEN_SIZE[1]-BAR_SIZE[1], BAR_SIZE[0], BAR_SIZE[1]) # action是MOVE_STAY、MOVE_LEFT、MOVE_RIGHT # ai控制棒子左右移动;返回游戏界面像素数和对应的奖励。(像素->奖励->强化棒子往奖励高的方向移动) def step(self, action): if action == MOVE_LEFT: self.bar_pos_x = self.bar_pos_x - 2 elif action == MOVE_RIGHT: self.bar_pos_x = self.bar_pos_x + 2 elif action == MOVE_RIGHT1: self.bar_pos_x = self.bar_pos_x + 1 else: pass if self.bar_pos_x < 0: self.bar_pos_x = 0 if self.bar_pos_x > SCREEN_SIZE[0] - BAR_SIZE[0]: self.bar_pos_x = SCREEN_SIZE[0] - BAR_SIZE[0] self.screen.fill(BLACK) self.bar_pos.left = self.bar_pos_x pygame.draw.rect(self.screen, WHITE, self.bar_pos) self.ball_pos.left += self.ball_dir_x * 2 self.ball_pos.bottom += self.ball_dir_y * 3 pygame.draw.rect(self.screen, WHITE, self.ball_pos) if self.ball_pos.top <= 0 or self.ball_pos.bottom >= (SCREEN_SIZE[1] - BAR_SIZE[1]+1): self.ball_dir_y = self.ball_dir_y * -1 if self.ball_pos.left <= 0 or self.ball_pos.right >= (SCREEN_SIZE[0]): self.ball_dir_x = self.ball_dir_x * -1 reward = 0 if self.bar_pos.top <= self.ball_pos.bottom and (self.bar_pos.left < self.ball_pos.right and self.bar_pos.right > self.ball_pos.left): reward = 1 # 击中奖励 elif self.bar_pos.top <= self.ball_pos.bottom and (self.bar_pos.left > self.ball_pos.right or self.bar_pos.right < self.ball_pos.left): reward = -1 # 没击中惩罚 # 获得游戏界面像素 screen_image = pygame.surfarray.array3d(pygame.display.get_surface()) #np.save(r'C:\Users\Administrator\Desktop\game\model\112454.npy',screen_image) pygame.display.update() # 返回游戏界面像素和对应的奖励 return reward, screen_image # learning_rate LEARNING_RATE = 0.99 # 更新梯度 INITIAL_EPSILON = 1.0 FINAL_EPSILON = 0.05 # 测试观测次数 EXPLORE = 500000 OBSERVE = 50000 # 存储过往经验大小 REPLAY_MEMORY = 500000 BATCH = 100 output = 4 # 输出层神经元数。代表3种操作-MOVE_STAY:[1, 0, 0] MOVE_LEFT:[0, 1, 0] MOVE_RIGHT:[0, 0, 1] input_image = tf.placeholder("float", [None, 80, 100, 4]) # 游戏像素 action = tf.placeholder("float", [None, output]) # 操作 # 定义CNN-卷积神经网络 参考:http://blog.topspeedsnail.com/archives/10451 def convolutional_neural_network(input_image): weights = {'w_conv1':tf.Variable(tf.zeros([8, 8, 4, 32])), 'w_conv2':tf.Variable(tf.zeros([4, 4, 32, 64])), 'w_conv3':tf.Variable(tf.zeros([3, 3, 64, 64])), 'w_fc4':tf.Variable(tf.zeros([3456, 784])), 'w_out':tf.Variable(tf.zeros([784, output]))} biases = {'b_conv1':tf.Variable(tf.zeros([32])), 'b_conv2':tf.Variable(tf.zeros([64])), 'b_conv3':tf.Variable(tf.zeros([64])), 'b_fc4':tf.Variable(tf.zeros([784])), 'b_out':tf.Variable(tf.zeros([output]))} conv1 = tf.nn.relu(tf.nn.conv2d(input_image, weights['w_conv1'], strides = [1, 4, 4, 1], padding = 
"VALID") + biases['b_conv1']) conv2 = tf.nn.relu(tf.nn.conv2d(conv1, weights['w_conv2'], strides = [1, 2, 2, 1], padding = "VALID") + biases['b_conv2']) conv3 = tf.nn.relu(tf.nn.conv2d(conv2, weights['w_conv3'], strides = [1, 1, 1, 1], padding = "VALID") + biases['b_conv3']) conv3_flat = tf.reshape(conv3, [-1, 3456]) fc4 = tf.nn.relu(tf.matmul(conv3_flat, weights['w_fc4']) + biases['b_fc4']) output_layer = tf.matmul(fc4, weights['w_out']) + biases['b_out'] return output_layer # 深度强化学习入门: https://www.nervanasys.com/demystifying-deep-reinforcement-learning/ # 训练神经网络 def train_neural_network(input_image): predict_action = convolutional_neural_network(input_image) argmax = tf.placeholder("float", [None, output]) gt = tf.placeholder("float", [None]) action = tf.reduce_sum(tf.multiply(predict_action, argmax), reduction_indices = 1) cost = tf.reduce_mean(tf.square(action - gt)) optimizer = tf.train.AdamOptimizer(1e-6).minimize(cost) game = Game() D = deque() _, image = game.step(MOVE_STAY) # 转换为灰度值 image = cv2.cvtColor(cv2.resize(image, (100, 80)), cv2.COLOR_BGR2GRAY) # 转换为二值 ret, image = cv2.threshold(image, 1, 255, cv2.THRESH_BINARY) input_image_data = np.stack((image, image, image, image), axis = 2) with tf.Session() as sess: sess.run(tf.initialize_all_variables()) saver = tf.train.Saver() n = 0 epsilon = INITIAL_EPSILON while True: action_t = predict_action.eval(feed_dict = {input_image : [input_image_data]})[0] argmax_t = np.zeros([output], dtype=np.int) if(random.random() <= INITIAL_EPSILON): maxIndex = random.randrange(output) else: maxIndex = np.argmax(action_t) argmax_t[maxIndex] = 1 if epsilon > FINAL_EPSILON: epsilon -= (INITIAL_EPSILON - FINAL_EPSILON) / EXPLORE for event in pygame.event.get(): #macOS需要事件循环,否则白屏 if event.type == QUIT: pygame.quit() sys.exit() reward, image = game.step(list(argmax_t)) image = cv2.cvtColor(cv2.resize(image, (100, 80)), cv2.COLOR_BGR2GRAY) ret, image = cv2.threshold(image, 1, 255, cv2.THRESH_BINARY) image = np.reshape(image, (80, 100, 1)) input_image_data1 = np.append(image, input_image_data[:, :, 0:3], axis = 2) D.append((input_image_data, argmax_t, reward, input_image_data1)) if len(D) > REPLAY_MEMORY: D.popleft() if n > OBSERVE: minibatch = random.sample(D, BATCH) input_image_data_batch = [d[0] for d in minibatch] argmax_batch = [d[1] for d in minibatch] reward_batch = [d[2] for d in minibatch] input_image_data1_batch = [d[3] for d in minibatch] gt_batch = [] out_batch = predict_action.eval(feed_dict = {input_image : input_image_data1_batch}) for i in range(0, len(minibatch)): gt_batch.append(reward_batch[i] + LEARNING_RATE * np.max(out_batch[i])) optimizer.run(feed_dict = {gt : gt_batch, argmax : argmax_batch, input_image : input_image_data_batch}) input_image_data = input_image_data1 n = n+1 if n % 100 == 0: saver.save(sess, 'D:/lolAI/model/game', global_step = n) # 保存模型 print(n, "epsilon:", epsilon, " " ,"action:", maxIndex, " " ,"reward:", reward) train_neural_network(input_image) ``` 这是我根据教程写的读取模型并且运行的代码 ``` import tensorflow as tf tf.reset_default_graph() with tf.Session() as sess: new_saver = tf.train.import_meta_graph('D:/lolAI/model/game-400.meta') new_saver.restore(sess, tf.train.latest_checkpoint('D:/lolAI/model')) print(sess.run(tf.initialize_all_variables())) ``` 代码我还没有看的很明白,希望大佬给点意见

A data-type problem with the Python face-recognition library

``` import face_recognition import cv2 import os def file_name(dir): names = os.listdir(dir) i=0 for name in names: index = name.rfind('.') name = name[:index] names[i]=name i=i+1 return names def file_list(dir): list_name=os.listdir(dir) return list_name video_capture = cv2.VideoCapture(0) face_dir="E:\\face" names1=file_name(face_dir) root=file_list(face_dir) for name1 in names1: image = face_recognition.load_image_file("E:\\face\\"+name1+".jpg") name1 = face_recognition.face_encodings(image)[0] # name1 = name1.astype('float64') # Create arrays of known face encodings and their names known_face_encodings = names1 known_face_names = names1 print(known_face_encodings) # Initialize some variables face_locations = [] face_encodings = [] face_names = [] process_this_frame = True while True: # Grab a single frame of video ret, frame = video_capture.read() # Resize frame of video to 1/4 size for faster face recognition processing small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25) # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses) rgb_small_frame = small_frame[:, :, ::-1] # Only process every other frame of video to save time if process_this_frame: # Find all the faces and face encodings in the current frame of video face_locations = face_recognition.face_locations(rgb_small_frame) face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations) face_names = [] for face_encoding in face_encodings: # See if the face is a match for the known face(s) #face_encoding = face_encoding.astype('float64') matches = face_recognition.compare_faces(known_face_encodings, face_encoding) name = "Unknown" print(matches) # If a match was found in known_face_encodings, just use the first one. if True in matches: first_match_index = matches.index(True) name = known_face_names[first_match_index] print(first_match_index) face_names.append(name) process_this_frame = not process_this_frame # Display the results for (top, right, bottom, left), name in zip(face_locations, face_names): # Scale back up face locations since the frame we detected in was scaled to 1/4 size top *= 4 right *= 4 bottom *= 4 left *= 4 # Draw a box around the face cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2) # Draw a label with a name below the face cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED) font = cv2.FONT_HERSHEY_DUPLEX cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1) # Display the resulting image cv2.imshow('Video', frame) # Hit 'q' on the keyboard to quit! if cv2.waitKey(1) & 0xFF == ord('q'): break # Release handle to the webcam video_capture.release() cv2.destroyAllWindows() ``` 总是提示: return np.linalg.norm(face_encodings - face_to_compare, axis=1) TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('S32') dtype('S32') dtype('S32') 这是什么鬼,转换了数据类型也没有用???
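The dtype('S32') in the error is the giveaway: known_face_encodings is being set to names1, a list of file-name strings, so compare_faces ends up subtracting strings instead of 128-d encodings. A sketch of building the two lists separately (names1 and the E:\face folder are the ones from the post):

```
import face_recognition

known_face_encodings = []   # 128-d float encodings, one per known face
known_face_names = []       # the matching display names

for name1 in names1:        # names1 comes from file_name(face_dir) as in the post
    image = face_recognition.load_image_file("E:\\face\\" + name1 + ".jpg")
    encodings = face_recognition.face_encodings(image)
    if encodings:           # skip files where no face was detected
        known_face_encodings.append(encodings[0])
        known_face_names.append(name1)
```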

Urgent: running yolov3-train.py in PyCharm reports an error, please help

![图片说明](https://img-ask.csdn.net/upload/201905/23/1558616226_449733.png) ``` import numpy as np import keras.backend as K from keras.layers import Input, Lambda from keras.models import Model from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping from yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss from yolo3.utils import get_random_data def _main(): annotation_path = 'train.txt' log_dir = 'logs/000/' classes_path = 'model_data/voc_classes.txt' anchors_path = 'model_data/yolo_anchors.txt' class_names = get_classes(classes_path) anchors = get_anchors(anchors_path) input_shape = (416,416) # multiple of 32, hw model = create_model(input_shape, anchors, len(class_names) ) train(model, annotation_path, input_shape, anchors, len(class_names), log_dir=log_dir) def train(model, annotation_path, input_shape, anchors, num_classes, log_dir='logs/'): model.compile(optimizer='adam', loss={ 'yolo_loss': lambda y_true, y_pred: y_pred}) logging = TensorBoard(log_dir=log_dir) checkpoint = ModelCheckpoint(log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5", monitor='val_loss', save_weights_only=True, save_best_only=True, period=1) batch_size = 8 val_split = 0.1 with open(annotation_path) as f: lines = f.readlines() np.random.shuffle(lines) num_val = int(len(lines)*val_split) num_train = len(lines) - num_val print('Train on {} samples, val on {} samples, with batch size {}.'.format(num_train, num_val, batch_size)) model.fit_generator ( data_generator_wrapper ( lines[:num_train] , batch_size , input_shape , anchors , num_classes ) , steps_per_epoch=max ( 1 , num_train // batch_size ) , validation_data=data_generator_wrapper ( lines[num_train:] , batch_size , input_shape , anchors , num_classes ) , validation_steps=max ( 1 , num_val // batch_size ) , epochs=10 , initial_epoch=0 , callbacks=[logging , checkpoint] ) model.save_weights(log_dir + 'trained_weights.h5') def get_classes(classes_path): with open(classes_path) as f: class_names = f.readlines() class_names = [c.strip() for c in class_names] return class_names def get_anchors(anchors_path): with open(anchors_path) as f: anchors = f.readline() anchors = [float(x) for x in anchors.split(',')] return np.array(anchors).reshape(-1, 2) def create_model(input_shape, anchors, num_classes, load_pretrained=False, freeze_body=False, weights_path='model_data/yolo_weights.h5'): K.clear_session() # get a new session h, w = input_shape image_input = Input(shape=(w, h, 3)) num_anchors = len(anchors) y_true = [Input(shape=(h//{0:32, 1:16, 2:8}[l], w//{0:32, 1:16, 2:8}[l], num_anchors//3, num_classes+5)) for l in range(3)] model_body = yolo_body(image_input, num_anchors//3, num_classes) print('Create YOLOv3 model with {} anchors and {} classes.'.format(num_anchors, num_classes)) if load_pretrained: model_body.load_weights(weights_path, by_name=True, skip_mismatch=True) print('Load weights {}.'.format(weights_path)) if freeze_body in [1, 2]: # Do not freeze 3 output layers. 
num = (185 , len ( model_body.layers ) - 3)[freeze_body - 1] for i in range(num): model_body.layers[i].trainable = False print('Freeze the first {} layers of total {} layers.'.format(num, len(model_body.layers))) model_loss = Lambda ( yolo_loss , output_shape=(1 ,) , name='yolo_loss', arguments={'anchors': anchors, 'num_classes': num_classes, 'ignore_thresh': 0.5} )(model_body.output + y_true) model = Model(inputs=[model_body.input] + y_true, outputs=model_loss) return model def data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes): n = len(annotation_lines) i = 0 while True: image_data = [] box_data = [] for b in range(batch_size): if i==0: np.random.shuffle(annotation_lines) image, box = get_random_data(annotation_lines[i], input_shape, random=True) image_data.append(image) box_data.append(box) i = (i+1) % n image_data = np.array(image_data) box_data = np.array(box_data) y_true = preprocess_true_boxes(box_data, input_shape, anchors, num_classes) yield [image_data]+y_true, np.zeros(batch_size) def data_generator_wrapper(annotation_lines, batch_size, input_shape, anchors, num_classes): n = len(annotation_lines) if n==0 or batch_size<=0: return None return data_generator(annotation_lines, batch_size, input_shape, anchors, num_classes) if __name__ == '__main__': _main() ``` 报了一个:tensorflow.python.framework.errors_impl.InvalidArgumentError: Inputs to operation training/Adam/gradients/AddN_24 of type _MklAddN must have the same size and shape. Input 0: [2768896] != input 1: [8,26,26,512] [[Node: training/Adam/gradients/AddN_24 = _MklAddN[N=2, T=DT_FLOAT, _kernel="MklOp", _device="/job:localhost/replica:0/task:0/device:CPU:0"](training/Adam/gradients/batch_normalization_65/FusedBatchNorm_grad/FusedBatchNormGrad, training/Adam/gradients/batch_n

Keras model training fails on a Raspberry Pi 4B with Python 3.7?

1.我在树莓派4B用keras训练模型时一直失败。[图片说明](https://img-ask.csdn.net/upload/202005/22/1590078114_273231.jpg) 2.在树莓派3B 上python3.5.3训练就没问题! 3.这是我在网上找的一个叫 ms-agv-car-master 的图像识别包。这是地址:https://github.com/jerry73204/ms-agv-car ``` import os import glob import argparse import cv2 import numpy as np from keras.models import Model from keras.layers import Dense, Activation, MaxPool2D, Conv2D, Flatten, Dropout, Input, BatchNormalization, Add from keras.optimizers import Adam from keras.utils import multi_gpu_model, plot_model # Keras 內建模型 # https://keras.io/applications from keras.applications.vgg16 import VGG16 from keras.applications.vgg19 import VGG19 from keras.applications.resnet50 import ResNet50 from keras.applications.densenet import DenseNet121 from keras.applications.mobilenetv2 import MobileNetV2 def custom_model(input_shape, n_classes): def conv_block(x, filters): x = BatchNormalization() (x) x = Conv2D(filters, (3, 3), activation='relu', padding='same') (x) x = BatchNormalization() (x) shortcut = x x = Conv2D(filters, (3, 3), activation='relu', padding='same') (x) x = Add() ([x, shortcut]) x = MaxPool2D((2, 2), strides=(2, 2)) (x) return x input_tensor = Input(shape=input_shape) x = conv_block(input_tensor, 32) x = conv_block(x, 64) x = conv_block(x, 128) x = conv_block(x, 256) x = conv_block(x, 512) x = Flatten() (x) x = BatchNormalization() (x) x = Dense(512, activation='relu') (x) x = Dense(512, activation='relu') (x) output_layer = Dense(n_classes, activation='softmax') (x) inputs = [input_tensor] model = Model(inputs, output_layer) return model def main(): # 定義程式參數 arg_parser = argparse.ArgumentParser(description='模型訓練範例') arg_parser.add_argument( '--model-file', required=True, help='模型描述檔', ) arg_parser.add_argument( '--weights-file', required=True, help='模型參數檔案', ) arg_parser.add_argument( '--data-dir', required=True, help='資料目錄', ) arg_parser.add_argument( '--model-type', choices=('VGG16', 'VGG19', 'ResNet50', 'DenseNet121', 'MobileNetV2', 'custom'), default='custom', help='選擇模型類別', ) arg_parser.add_argument( '--epochs', type=int, default=32, help='訓練回合數', ) arg_parser.add_argument( '--output-file', default='-', help='預測輸出檔案', ) arg_parser.add_argument( '--input-width', type=int, default=48, help='模型輸入寬度', ) arg_parser.add_argument( '--input-height', type=int, default=48, help='模型輸入高度', ) arg_parser.add_argument( '--load-weights', action='store_true', help='從 --weights-file 指定的檔案載入模型參數', ) arg_parser.add_argument( '--num-gpu', type=int, default=1, help='使用的GPU數量,預設為1', ) arg_parser.add_argument( '--plot-model', help='繪製模型架構圖', ) args = arg_parser.parse_args() # 資料參數 input_height = args.input_height input_width = args.input_width input_channel = 3 input_shape = (input_height, input_width, input_channel) n_classes = 4 # 建立模型 if args.model_type == 'VGG16': input_tensor = Input(shape=input_shape) model = VGG16( input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor, ) elif args.model_type == 'VGG19': input_tensor = Input(shape=input_shape) model = VGG19( input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor, ) elif args.model_type == 'ResNet50': input_tensor = Input(shape=input_shape) model = ResNet50( input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor, ) elif args.model_type == 'DenseNet121': input_tensor = Input(shape=input_shape) model = DenseNet121( input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor, ) elif args.model_type == 'MobileNetV2': input_tensor = 
Input(shape=input_shape) model = MobileNetV2( input_shape=input_shape, classes=n_classes, weights=None, input_tensor=input_tensor, ) elif args.model_type == 'custom': model = custom_model(input_shape, n_classes) if args.num_gpu > 1: model = multi_gpu_model(model, gpus=args.num_gpu) if args.plot_model is not None: plot_model(model, to_file=args.plot_model) adam = Adam() model.compile( optimizer=adam, loss='categorical_crossentropy', metrics=['acc'], ) # 搜尋所有圖檔 match_left = os.path.join(args.data_dir, 'left', '*.jpg') paths_left = glob.glob(match_left) match_right = os.path.join(args.data_dir, 'right', '*.jpg') paths_right = glob.glob(match_right) match_stop = os.path.join(args.data_dir, 'stop', '*.jpg') paths_stop = glob.glob(match_stop) match_other = os.path.join(args.data_dir, 'other', '*.jpg') paths_other = glob.glob(match_other) match_test = os.path.join(args.data_dir, 'test', '*.jpg') paths_test = glob.glob(match_test) n_train = len(paths_left) + len(paths_right) + len(paths_stop) + len(paths_other) n_test = len(paths_test) # 初始化資料集矩陣 trainset = np.zeros( shape=(n_train, input_height, input_width, input_channel), dtype='float32', ) label = np.zeros( shape=(n_train, n_classes), dtype='float32', ) testset = np.zeros( shape=(n_test, input_height, input_width, input_channel), dtype='float32', ) # 讀取圖片到資料集 paths_train = paths_left + paths_right + paths_stop + paths_other for ind, path in enumerate(paths_train): image = cv2.imread(path) resized_image = cv2.resize(image, (input_width, input_height)) trainset[ind] = resized_image for ind, path in enumerate(paths_test): image = cv2.imread(path) resized_image = cv2.resize(image, (input_width, input_height)) testset[ind] = resized_image # 設定訓練集的標記 n_left = len(paths_left) n_right = len(paths_right) n_stop = len(paths_stop) n_other = len(paths_other) begin_ind = 0 end_ind = n_left label[begin_ind:end_ind, 0] = 1.0 begin_ind = n_left end_ind = n_left + n_right label[begin_ind:end_ind, 1] = 1.0 begin_ind = n_left + n_right end_ind = n_left + n_right + n_stop label[begin_ind:end_ind, 2] = 1.0 begin_ind = n_left + n_right + n_stop end_ind = n_left + n_right + n_stop + n_other label[begin_ind:end_ind, 3] = 1.0 # 正規化數值到 0~1 之間 trainset = trainset / 255.0 testset = testset / 255.0 # 載入模型參數 if args.load_weights: model.load_weights(args.weights_file) # 訓練模型 if args.epochs > 0: model.fit( trainset, label, epochs=args.epochs, validation_split=0.2, # batch_size=64, ) # 儲存模型架構及參數 model_desc = model.to_json() with open(args.model_file, 'w') as file_model: file_model.write(model_desc) model.save_weights(args.weights_file) # 執行預測 if testset.shape[0] != 0: result_onehot = model.predict(testset) result_sparse = np.argmax(result_onehot, axis=1) else: result_sparse = list() # 印出預測結果 if args.output_file == '-': print('檔名\t預測類別') for path, label_id in zip(paths_test, result_sparse): filename = os.path.basename(path) if label_id == 0: label_name = 'left' elif label_id == 1: label_name = 'right' elif label_id == 2: label_name = 'stop' elif label_id == 3: label_name = 'other' print('%s\t%s' % (filename, label_name)) else: with open(args.output_file, 'w') as file_out: file_out.write('檔名\t預測類別\n') for path, label_id in zip(paths_test, result_sparse): filename = os.path.basename(path) if label_id == 0: label_name = 'left' elif label_id == 1: label_name = 'right' elif label_id == 2: label_name = 'stop' elif label_id == 3: label_name = 'other' file_out.write('%s\t%s\n' % (filename, label_name)) if __name__ == '__main__': main() ```

How do I solve this image dimension conversion problem in TensorFlow?

This is the code that reads the images:

```
def extract_data():
    imgs = []
    training_size, img_train_array, img_train_map_array = read_train_from_txt_file(train_txt_filename)
    for i in range(0, training_size):
        image_filename = img_train_array[i]
        if os.path.isfile(image_filename):
            print('Loading:' + image_filename)
            img_file = cv.imread(image_filename)
            img_file = np.array(img_file)
            imgs.append(img_file)
        else:
            print('File' + image_filename + 'does not exist!')
    num_img = len(imgs)
    img_patches = [img_crop(imgs[i]) for i in range(num_img)]
    data = [img_patches[i][j] for i in range(len(img_patches)) for j in range(len(img_patches[i]))]
    return np.asarray(data)
```

This is the calling code:

```
train_data = extract_data()
train_data_2 = np.array(train_data)
train_data_final = tf.reshape(train_data_2, [None, IMG_PATCH_SIZE, IMG_PATCH_SIZE, 3])
train_label = extract_labels()
train_label_2 = np.array(train_label)
train_label_final = tf.reshape(train_label_2, [None, NUM_LABEL])
```

But it raises the following error:

```
TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 16, 16, 3]. Consider casting elements to a supported type.
```

I have already converted the data with asarray, so why is it still complaining about a list type? I'm a beginner and would really appreciate help; it's urgent!
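The message names the actual problem: the shape list [None, 16, 16, 3] cannot be turned into a tensor because of the None. tf.reshape (like np.reshape) uses -1, not None, for the dimension to be inferred. A minimal sketch of the two reshape calls (NUM_LABEL's value here is hypothetical; keep the constant from the original script):

```
import tensorflow as tf

IMG_PATCH_SIZE = 16
NUM_LABEL = 6   # hypothetical value

# train_data_2 / train_label_2 are the np.array(...) results from the calling code above
train_data_final = tf.reshape(train_data_2, [-1, IMG_PATCH_SIZE, IMG_PATCH_SIZE, 3])
train_label_final = tf.reshape(train_label_2, [-1, NUM_LABEL])
```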

FileNotFoundError during MNIST visualization

```
import keras
import numpy as np
from keras.datasets import mnist
from keras.models import load_model
from matplotlib import pyplot as plt
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations

# Load the data and define its format
batch_size = 128
num_classes = 10
epochs = 5
img_rows, img_cols = 28, 28
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Build the DNN model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

# Start the saliency map visualization
# Find the index of the first handwritten "0"
class_idx = 0
indices = np.where(y_test[:, class_idx] == 1.)[0]
idx = indices[0]

# Find the layer named 'preds' and return its index
layer_idx = utils.find_layer_idx(model, 'preds')

# Change that layer's activation from softmax to linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

# Compute the heatmap of a specific image of this class from x_test at that layer
for modifier in ['guided', 'relu']:
    grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
                               seed_input=x_test[idx], backprop_modifier=modifier)
    plt.figure()
    plt.title(modifier)
    # Visualize the heatmap with the 'jet' colormap
    plt.imshow(grads, cmap='jet')
```

Error: execution fails at model = utils.apply_modifications(model) with:
FileNotFoundError: [WinError 3] The system cannot find the path specified: '/tmp/cv86obbj.h5'
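keras-vis's utils.apply_modifications rebuilds the graph by saving the modified model to a temporary .h5 file and loading it back; the installed version apparently writes to a POSIX-style /tmp path, which does not exist on Windows, hence the FileNotFoundError. A hedged workaround sketch is to do that save/reload step by hand with a temp directory that does exist (the helper name below is made up for illustration; model, layer_idx and activations come from the script above):

```
import os
import tempfile
from keras.models import load_model

def apply_modifications_workaround(model, custom_objects=None):
    # Save and reload the model so the changed activation is actually
    # rebuilt into the graph, using a temp dir that exists on Windows.
    model_path = os.path.join(tempfile.gettempdir(), 'vis_modified_model.h5')
    try:
        model.save(model_path)
        return load_model(model_path, custom_objects=custom_objects)
    finally:
        if os.path.exists(model_path):
            os.remove(model_path)

model.layers[layer_idx].activation = activations.linear
model = apply_modifications_workaround(model)
```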

When training a dataset in TensorFlow I get InvalidArgumentError: Incompatible shapes: [15] vs. [15,6]. The label placeholder does not match the format of the label data being fed in. How can I fix this?

InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] 报错的详细信息如下所示: ``` INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] Caused by op 'input_producer/input_producer_EnqueueMany', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 320, in <module> 
batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) File "<ipython-input-19-6fa659dba762>", line 147, in batch_test tf_image,tf_label = read_records(record_file,resize_height,resize_width,type='normalization') File "<ipython-input-19-6fa659dba762>", line 84, in read_records filename_queue = tf.train.string_input_producer([filename]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 232, in string_input_producer cancel_op=cancel_op) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\training\input.py", line 164, in input_producer enq = q.enqueue_many([input_tensor]) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 367, in enqueue_many self._queue_ref, vals, name=scope) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 1556, in _queue_enqueue_many_v2 name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() CancelledError (see above for traceback): Enqueue operation was cancelled [[Node: input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_STRING], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](input_producer, input_producer/RandomShuffle)]] --------------------------------------------------------------------------- InvalidArgumentError Traceback (most recent call last) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1038 try: -> 1039 return fn(*args) 1040 except errors.OpError as e: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1020 feed_dict, fetch_list, target_list, -> 1021 status, run_metadata) 1022 H:\aa\Anaconda\anaconda\envs\tensorflow\lib\contextlib.py in __exit__(self, type, value, traceback) 87 try: ---> 88 next(self.gen) 89 except StopIteration: H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status() 465 compat.as_text(pywrap_tensorflow.TF_Message(status)), --> 466 pywrap_tensorflow.TF_GetCode(status)) 467 finally: InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] During handling of the above exception, another exception occurred: InvalidArgumentError Traceback (most recent call last) <ipython-input-19-6fa659dba762> in <module>() 318 range_num = 5 319 --> 320 batch_test(data_path, 100, 100, n_batch, train_op, loss, acc, range_num, val_batch) 321 <ipython-input-19-6fa659dba762> in batch_test(record_file, resize_height, resize_width, n_batch, train_op, loss, acc, range_num, val_batch) 187 images_x = np.reshape(images, (-1, 30000)) 188 labels_y = np.reshape(labels, (-1, 6)) --> 189 _,err,ac = sess.run([train_op,loss,acc],feed_dict={x:images, y_:labels_y}) # 50% 神经元在工作中 190 train_loss = train_loss + err 191 train_acc = train_acc + ac H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in run(self, fetches, feed_dict, options, run_metadata) 776 try: 777 result = self._run(None, fetches, feed_dict, options_ptr, --> 778 run_metadata_ptr) 779 if run_metadata: 780 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _run(self, handle, fetches, feed_dict, options, run_metadata) 980 if final_fetches or final_targets: 981 results = self._do_run(handle, final_targets, final_fetches, --> 982 feed_dict_string, options, run_metadata) 983 else: 984 results = [] H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1030 if handle is None: 1031 return self._do_call(_run_fn, self._session, feed_dict, fetch_list, -> 1032 target_list, options, run_metadata) 1033 else: 1034 return self._do_call(_prun_fn, self._session, handle, feed_dict, H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args) 1050 except KeyError: 1051 pass -> 1052 raise type(e)(node_def, op, message) 1053 1054 def _extend_graph(self): InvalidArgumentError: Incompatible shapes: [15] vs. 
[15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] Caused by op 'Equal', defined at: File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance app.start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 477, in start ioloop.IOLoop.instance().start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start super(ZMQIOLoop, self).start() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 888, in start handler_func(fd_obj, events) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events self._handle_recv() File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv self._run_callback(callback, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback callback(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper return fn(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher return self.dispatch_shell(stream, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell handler(stream, idents, msg) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request user_expressions, allow_stdin) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell interactivity=interactivity, compiler=compiler, result=result) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2802, in run_ast_nodes if self.run_code(code, result): File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-19-6fa659dba762>", line 311, in <module> correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.float32), y_) File 
"H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 672, in equal result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "H:\aa\Anaconda\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Incompatible shapes: [15] vs. [15,6] [[Node: Equal = Equal[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](Cast_1, _recv_y__0/_21)]] [[Node: Mean/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_177_Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]] ``` x,y- 占位符打印的信息如下: ``` x: Tensor("x-input:0", shape=(?, 100, 100, 3), dtype=float32) y_:Tensor("y_:0", shape=(?, 6), dtype=float32) ``` image 和 labels 的打印信息如下: ``` shape:(15, 100, 100, 3),tpye:float32,labels:[[ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0. 1.] [ 0. 0. 1. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 1. 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 1. 0.] [ 0. 0. 0. 0. 1. 0.]]

Error: sys.argv[1] IndexError: list index out of range?

At runtime I get the error: firstFolder = sys.argv[1] IndexError: list index out of range. What is going on?

```
import numpy as np
import cv2
import sys
from matplotlib import pyplot as plt

# img = cv2.imread('logo.png',0)
# # Initiate ORB detector
# orb = cv2.ORB_create()
# # find the keypoints with ORB
# kp = orb.detect(img,None)
# # compute the descriptors with ORB
# kp, des = orb.compute(img, kp)
# # draw only keypoints location,not size and orientation
# img2 = cv2.drawKeypoints(img, kp, None, color=(0,255,0), flags=0)
# plt.imshow(img2), plt.show()

from os import listdir
from os.path import isfile, join


class Application:
    def __init__(self, extractor, detector):
        self.extractor = extractor
        self.detector = detector

    def train_vocabulary(self, file_list, vocabulary_size):
        kmeans_trainer = cv2.BOWKMeansTrainer(vocabulary_size)
        for path_to_image in file_list:
            img = cv2.imread(path_to_image, 0)
            kp, des = self.detector.detectAndCompute(img, None)
            kmeans_trainer.add(des)
        return kmeans_trainer.cluster()

    def extract_features_from_image(self, file_name):
        image = cv2.imread(file_name)
        return self.extractor.compute(image, self.detector.detect(image))

    def extract_train_data(self, file_list, category):
        train_data, train_responses = [], []
        for path_to_file in file_list:
            train_data.extend(self.extract_features_from_image(path_to_file))
            train_responses.append(category)
        return train_data, train_responses

    def train_classifier(self, data, responses):
        n_trees = 200
        max_depth = 10
        model = cv2.ml.RTrees_create()
        eps = 1
        criteria = (cv2.TERM_CRITERIA_MAX_ITER, n_trees, eps)
        model.setTermCriteria(criteria)
        model.setMaxDepth(max_depth)
        model.train(np.array(data), cv2.ml.ROW_SAMPLE, np.array(responses))
        return model

    def predict(self, file_name):
        features = self.extract_features_from_image(file_name)
        return self.classifier.predict(features)[0]

    def train(self, files_array, vocabulary_size=12):
        all_categories = []
        for category in files_array:
            all_categories += category
        vocabulary = self.train_vocabulary(all_categories, vocabulary_size)
        self.extractor.setVocabulary(vocabulary)
        data = []
        responses = []
        for id in range(len(files_array)):
            data_temp, responses_temp = self.extract_train_data(files_array[id], id)
            data += data_temp
            responses += responses_temp
        self.classifier = self.train_classifier(data, responses)

    def error(self, file_list, category):
        responses = np.array([self.predict(file) for file in file_list])
        _responses = np.array([category for _ in range(len(responses))])
        return 1 - np.sum(responses == _responses) / len(responses)


def get_images_from_folder(folder):
    return ["%s/%s" % (folder, f) for f in listdir(folder) if isfile(join(folder, f))]


def start(folders, detector_type, voc_size, train_proportion):
    if detector_type == "SIFT":  # "Scale Invariant Feature Transform"
        extract = cv2.xfeatures2d.SIFT_create()
        detector = cv2.xfeatures2d.SIFT_create()
    else:  # "Speeded up Robust Features"
        extract = cv2.xfeatures2d.SURF_create()
        detector = cv2.xfeatures2d.SURF_create()
    flann_params = dict(algorithm=1, trees=5)
    matcher = cv2.FlannBasedMatcher(flann_params, {})
    extractor = cv2.BOWImgDescriptorExtractor(extract, matcher)
    train = []
    test = []
    for folder in folders:
        images = get_images_from_folder(folder)
        np.random.shuffle(images)
        slice = int(len(images) * train_proportion)
        train_images = images[0:slice]
        test_images = images[slice:]
        train.append(train_images)
        test.append(test_images)
    app = Application(extractor, detector)
    app.train(train, voc_size)
    total_error = 0.0
    for id in range(len(test)):
        print(app.error(train[id], id))
        test_error = app.error(test[id], id)
        print(test_error)
        print("---------")
        total_error = total_error + test_error
    total_error = total_error / float(len(test))
    print("Total error = %f" % total_error)


firstFolder = sys.argv[1]
secondFolder = sys.argv[2]
detectorType = sys.argv[3]
vocSize = int(sys.argv[4])
trainProportion = float(sys.argv[5])

start([firstFolder, secondFolder], detectorType, vocSize, trainProportion)
```
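The IndexError simply means the script was launched without its five positional arguments, so sys.argv contains only the script name (this typically happens when it is started from an IDE's Run button instead of a terminal). A small sketch of a guard plus the expected invocation; the script name bow_classify.py is made up for the example:

```
import sys

# The script expects five positional arguments; with none given,
# sys.argv == ['bow_classify.py'] and sys.argv[1] raises IndexError.
if len(sys.argv) < 6:
    sys.exit("usage: python bow_classify.py <firstFolder> <secondFolder> <SIFT|SURF> <vocSize> <trainProportion>")

firstFolder = sys.argv[1]
secondFolder = sys.argv[2]
detectorType = sys.argv[3]
vocSize = int(sys.argv[4])
trainProportion = float(sys.argv[5])
```

For example: python bow_classify.py cats dogs SIFT 12 0.8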
