How should load_data be defined to use my own data in the MNIST tutorial?

While learning MNIST I used the official dataset package.
Now I want to switch to my own dataset: in the line `(x_train, y_train)=mnist.load_data()`, what should replace `mnist`?
If I simply delete `mnist`, the error says `load_data` is not defined; if I substitute an arbitrary name such as "s", the error says `s` is not defined. How should this be fixed?
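In that line, `mnist` is a dataset module whose `load_data()` simply returns NumPy arrays, so it cannot be replaced by an arbitrary name; instead, write your own function that returns the same `(x_train, y_train), (x_test, y_test)` structure and call that. Below is a minimal sketch, assuming the images live in per-class folders named after their integer labels (`my_dataset/train/0/...`, `my_dataset/test/0/...`); the paths, image size, and helper names are illustrative assumptions, not part of Keras.

```
# A minimal sketch of a custom load_data(), assuming images are stored as
#   my_dataset/train/<label>/*.png  and  my_dataset/test/<label>/*.png
# (folder layout and image size are assumptions -- adapt them to your data).
import os
import numpy as np
from PIL import Image

def load_data(root="my_dataset", img_size=(28, 28)):
    def load_split(split):
        images, labels = [], []
        split_dir = os.path.join(root, split)
        for label in sorted(os.listdir(split_dir)):
            class_dir = os.path.join(split_dir, label)
            for fname in os.listdir(class_dir):
                img = Image.open(os.path.join(class_dir, fname)).convert("L")
                img = img.resize(img_size)
                images.append(np.asarray(img, dtype=np.uint8))
                labels.append(int(label))   # assumes folder names are integer labels
        return np.stack(images), np.asarray(labels, dtype=np.int64)

    x_train, y_train = load_split("train")
    x_test, y_test = load_split("test")
    return (x_train, y_train), (x_test, y_test)

# used exactly like keras.datasets.mnist.load_data():
(x_train, y_train), (x_test, y_test) = load_data()
```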

Other related questions
How do I use a home-made dataset built in the style of Fashion-MNIST?
I am working on a related project that needs its own dataset, so I built one as required, shown here: ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843320_334333.png) I am using the MXNet framework and writing in a Jupyter notebook. My approach was to read the training set and test set into arrays and package them. The training set is shown below; the two arrays hold the image pixels and the corresponding labels: ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843532_588917.png) The training loop is shown below; to iterate over the two parts separately I split train_iter into train_iter[0] and train_iter[1]: ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843838_858564.png) The problem appears when the training model is loaded (line 12), with the following error: ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574844074_977535.png) I have spent days on this data-type issue and still cannot tell whether my way of loading the dataset is wrong or something else. The function below is the one that loads the real Fashion-MNIST dataset; I do not fully understand it and am still experimenting, but guidance from someone experienced would be much appreciated. ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574844395_318556.png) Code attached:

```
def load_data_fashion_mnist(batch_size, resize=None, root=os.path.join(
        '~', '.mxnet', 'datasets', 'fashion-mnist')):
    root = os.path.expanduser(root)  # expand the user path '~'
    transformer = []
    if resize:
        transformer += [gdata.vision.transforms.Resize(resize)]
    transformer += [gdata.vision.transforms.ToTensor()]
    transformer = gdata.vision.transforms.Compose(transformer)
    mnist_train = gdata.vision.FashionMNIST(root=root, train=True)
    mnist_test = gdata.vision.FashionMNIST(root=root, train=False)
    num_workers = 0 if sys.platform.startswith('win32') else 4
    train_iter = gdata.DataLoader(
        mnist_train.transform_first(transformer), batch_size, shuffle=True,
        num_workers=num_workers)
    test_iter = gdata.DataLoader(
        mnist_test.transform_first(transformer), batch_size, shuffle=False,
        num_workers=num_workers)
    return train_iter, test_iter

batch_size = 128
# if an "out of memory" error appears, reduce batch_size or resize
train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=224)
```
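The type error here usually comes from feeding raw NumPy arrays to a training loop that expects a Gluon DataLoader. One way to make a home-made dataset behave like the built-in loader is to wrap the arrays in `gluon.data.ArrayDataset` and `DataLoader`. This is a sketch under the assumption that `train_imgs`/`test_imgs` have shape (N, 28, 28) and `train_lbls`/`test_lbls` have shape (N,); the variable names are placeholders for the asker's own arrays.

```
# Sketch: wrap home-made NumPy arrays into Gluon data iterators so each batch
# unpacks as (X, y), the same as the built-in Fashion-MNIST loader.
import numpy as np
from mxnet import nd
from mxnet.gluon import data as gdata

def make_iters(train_imgs, train_lbls, test_imgs, test_lbls, batch_size=128):
    def to_nd(imgs):
        # scale to float32 in [0, 1] and add the channel axis a CNN expects (NCHW)
        return nd.array(imgs.astype('float32') / 255.0).expand_dims(axis=1)

    train_ds = gdata.ArrayDataset(to_nd(train_imgs), nd.array(train_lbls))
    test_ds = gdata.ArrayDataset(to_nd(test_imgs), nd.array(test_lbls))
    train_iter = gdata.DataLoader(train_ds, batch_size, shuffle=True)
    test_iter = gdata.DataLoader(test_ds, batch_size, shuffle=False)
    return train_iter, test_iter

# for X, y in train_iter: ...  works like the output of load_data_fashion_mnist
```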
TensorFlow downloads the Fashion-MNIST dataset the first time it runs; the download was apparently interrupted by a bad network connection and now it errors out — what can I do?
**TensorFlow downloads the Fashion-MNIST dataset on its first run; the download seems to have been interrupted by a poor connection and now load_data() raises an error — what should I do?** The code is as follows:

```
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
```

The error output:

```
2.1.0
sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0)
matplotlib 2.2.5
numpy 1.16.6
pandas 0.24.2
sklearn 0.20.4
tensorflow 2.1.0
tensorflow_core.python.keras.api._v2.keras 2.2.4-tf
Traceback (most recent call last):
  File "/home/join/test_demo/test2.py", line 26, in <module>
    (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/keras/datasets/fashion_mnist.py", line 59, in load_data
    imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)
  File "/usr/lib/python2.7/gzip.py", line 261, in read
    self._read(readsize)
  File "/usr/lib/python2.7/gzip.py", line 315, in _read
    self._read_eof()
  File "/usr/lib/python2.7/gzip.py", line 354, in _read_eof
    hex(self.crc)))
IOError: CRC check failed 0xa445bb78 != 0xe7f80d3fL
```
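A CRC failure inside the gzip reader means the cached download is truncated or corrupted, so `load_data()` will keep failing until the cached archives are removed and re-downloaded. A sketch of clearing the cache is below; Keras normally caches the four .gz files under `~/.keras/datasets/fashion-mnist`, but verify the exact path on your machine before deleting anything.

```
# Sketch: remove the corrupted Fashion-MNIST cache so Keras downloads it again.
import os
import shutil

cache_dir = os.path.expanduser(os.path.join("~", ".keras", "datasets", "fashion-mnist"))
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)        # delete the truncated archives
    print("removed", cache_dir)

# the next call fetches fresh copies
from tensorflow import keras
(x_train_all, y_train_all), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
```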
Running mnist_show.py in PyCharm produces the following problem
# 这是深度学习入门这本书里的一段代码,请问这个问题是什么意思以及怎样解决? 报错如下:(下面有源代码)Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] on win32 runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') Converting train-images-idx3-ubyte.gz to NumPy Array ... Traceback (most recent call last): File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-eab209ee1d7f>", line 1, in <module> runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py", line 13, in <module> (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 106, in load_mnist init_mnist() File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 76, in init_mnist dataset = _convert_numpy() 源代码为:# coding: utf-8 mnist_show.py:::: import sys, os sys.path.append(os.pardir) # 为了导入父目录的文件而进行的设定 import numpy as np from dataset.mnist import load_mnist from PIL import Image def img_show(img): pil_img = Image.fromarray(np.uint8(img)) pil_img.show() (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) img = x_train[0] label = t_train[0] print(label) # 5 print(img.shape) # (784,) img = img.reshape(28, 28) # 把图像的形状变为原来的尺寸 print(img.shape) # (28, 28) img_show(img) mnist.py::: # coding: utf-8 try: import urllib.request except ImportError: raise ImportError('You should use Python 3.x') import os.path import gzip import pickle import os import numpy as np url_base = 'http://yann.lecun.com/exdb/mnist/' key_file = { 'train_img':'train-images-idx3-ubyte.gz', 'train_label':'train-labels-idx1-ubyte.gz', 'test_img':'t10k-images-idx3-ubyte.gz', 'test_label':'t10k-labels-idx1-ubyte.gz' } dataset_dir = os.path.dirname(os.path.abspath(__file__)) save_file = dataset_dir + "/mnist.pkl" train_num = 60000 test_num = 10000 img_dim = (1, 28, 28) img_size = 784 def _download(file_name): file_path = dataset_dir + "/" + file_name if os.path.exists(file_path): return print("Downloading " + file_name + " ... 
") urllib.request.urlretrieve(url_base + file_name, file_path) print("Done") def download_mnist(): for v in key_file.values(): _download(v) def _load_label(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: labels = np.frombuffer(f.read(), np.uint8, offset=8) print("Done") return labels def _load_img(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: data = np.frombuffer(f.read(), np.uint8, offset=16) data = data.reshape(-1, img_size) print("Done") return data def _convert_numpy(): dataset = {} dataset['train_img'] = _load_img(key_file['train_img']) dataset['train_label'] = _load_label(key_file['train_label']) dataset['test_img'] = _load_img(key_file['test_img']) dataset['test_label'] = _load_label(key_file['test_label']) return dataset def init_mnist(): download_mnist() dataset = _convert_numpy() print("Creating pickle file ...") with open(save_file, 'wb') as f: pickle.dump(dataset, f, -1) print("Done!") def _change_one_hot_label(X): T = np.zeros((X.size, 10)) for idx, row in enumerate(T): row[X[idx]] = 1 return T def load_mnist(normalize=True, flatten=True, one_hot_label=False): """读入MNIST数据集 Parameters ---------- normalize : 将图像的像素值正规化为0.0~1.0 one_hot_label : one_hot_label为True的情况下,标签作为one-hot数组返回 one-hot数组是指[0,0,1,0,0,0,0,0,0,0]这样的数组 flatten : 是否将图像展开为一维数组 Returns ------- (训练图像, 训练标签), (测试图像, 测试标签) """ if not os.path.exists(save_file): init_mnist() with open(save_file, 'rb') as f: dataset = pickle.load(f) if normalize: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].astype(np.float32) dataset[key] /= 255.0 if one_hot_label: dataset['train_label'] = _change_one_hot_label(dataset['train_label']) dataset['test_label'] = _change_one_hot_label(dataset['test_label']) if not flatten: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].reshape(-1, 1, 28, 28) return (dataset['train_img'], dataset['train_label']), (dataset['test_img'], dataset['test_label']) if __name__ == '__main__': init_mnist()
FileNotFoundError when visualizing MNIST
import keras import numpy as np from keras.datasets import mnist from keras.models import load_model from matplotlib import pyplot as plt from keras.models import Sequential,Model from keras.layers import Dense,Dropout,Flatten,Activation,Input from keras.layers import Conv2D,MaxPooling2D from vis.visualization import visualize_saliency from vis.utils import utils from keras import activations #加载数据及定义格式 batch_size = 128 num_classes = 10 epochs = 5 img_rows, img_cols = 28, 28 (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) #建立DNN模型 model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax', name='preds')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) #开始显著图的可视化(saliency visualization) #找出第一张手写体0的下标 class_idx=0 indices=np.where(y_test[:,class_idx]==1.)[0] idx=indices[0] #找出名字叫preds的layer,并返回它的下标 layer_idx=utils.find_layer_idx(model,'preds') #将找到对应下标的layer的activation从softmax变成linear model.layers[layer_idx].activation=activations.linear model = utils.apply_modifications(model) #求出x_test中属于某类的某个特定图像在某个layer的heatmap for modifier in ['guided','relu']: grads=visualize_saliency(model,layer_idx,filter_indices=class_idx,seed_input=x_test[idx],backprop_modifier=modifier) plt.figure() plt.title(modifier) #以'jet'colormap的方式可视化一张heatmap plt.imshow(grads, cmap='jet') 报错: 执行到model = utils.apply_modifications(model)时报错 错误:FileNotFoundError: [WinError 3] 系统找不到指定的路径。: '/tmp/cv86obbj.h5'
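The `FileNotFoundError: '/tmp/cv86obbj.h5'` happens because keras-vis's `utils.apply_modifications` saves the model to a temporary .h5 file and reloads it, and on Windows the `/tmp` directory it uses does not exist. One workaround that avoids the bad path is to perform the save-and-reload step yourself through a valid temp directory after changing the activation; this is a sketch, assuming `model` and `layer_idx` are already defined as in the question, not keras-vis's own API.

```
# Sketch of a Windows-safe replacement for utils.apply_modifications(model):
# save and reload the model so the changed activation takes effect.
import os
import tempfile
from keras.models import load_model
from keras import activations

model.layers[layer_idx].activation = activations.linear
tmp_path = os.path.join(tempfile.gettempdir(), 'model_linear.h5')  # valid on any OS
model.save(tmp_path)
model = load_model(tmp_path)   # the reloaded graph now uses the linear activation
```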
Training on fashion_mnist with TensorFlow in Spyder: the loss is NaN and the accuracy never changes
1.建立了一个3个全连接层的神经网络; 2.代码如下: ``` import matplotlib as mpl import matplotlib.pyplot as plt #%matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time import tensorflow as tf from tensorflow import keras print(tf.__version__) print(sys.version_info) for module in mpl, np, sklearn, tf, keras: print(module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data() x_valid, x_train = x_train_all[:5000], x_train_all[5000:] y_valid, y_train = y_train_all[:5000], y_train_all[5000:] #tf.keras.models.Sequential model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape= [28,28])) model.add(keras.layers.Dense(300, activation="relu")) model.add(keras.layers.Dense(100, activation="relu")) model.add(keras.layers.Dense(10,activation="softmax")) ###sparse为最后输出为index类型,如果为one hot类型,则不需加sparse model.compile(loss = "sparse_categorical_crossentropy",optimizer = "sgd", metrics = ["accuracy"]) #model.layers #model.summary() history = model.fit(x_train, y_train, epochs=10, validation_data=(x_valid,y_valid)) ``` 3.输出结果: ``` runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2') 2.0.0 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0) matplotlib 3.1.1 numpy 1.16.5 sklearn 0.21.3 tensorflow 2.0.0 tensorflow_core.keras 2.2.4-tf Train on 55000 samples, validate on 5000 samples Epoch 1/10 WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: 55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 2/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 3/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 4/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 5/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 6/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 7/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 8/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 9/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 10/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 ```
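A likely cause of the NaN loss here is that the raw uint8 pixel values (0–255) are fed straight into the Dense layers with the plain `sgd` optimizer, which makes the gradients blow up in the very first steps; scaling the training, validation, and test inputs to a small range before `model.fit` usually fixes it. A minimal sketch, assuming the arrays from the question are in scope:

```
# Sketch: normalise the inputs before model.fit -- the usual fix for NaN loss here.
x_train = x_train.astype("float32") / 255.0
x_valid = x_valid.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
# alternatively, standardise with statistics computed on the training set only
# (e.g. sklearn's StandardScaler applied to the flattened images).
```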
How do I feed images into Keras?
I am building a CNN with Keras — how do I feed my images in? Is the source code of mnist.load_data() available anywhere?
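`mnist.load_data()` (its source lives in `keras/datasets/mnist.py`) just reads a cached archive and returns NumPy arrays of shape (N, 28, 28) plus label vectors, so for your own images you can either build such arrays yourself or let Keras read a folder tree. A sketch of the folder-tree route with `ImageDataGenerator.flow_from_directory` is below; the directory layout, target size, and batch size are illustrative assumptions.

```
# Sketch: feed your own images to a Keras CNN via flow_from_directory,
# assuming a layout like  data/train/<class_name>/*.jpg
from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    "data/train",
    target_size=(28, 28),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=32,
)

# model.fit_generator(train_gen, epochs=5)   # with an appropriately shaped model
```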
datasets.map() in TensorFlow raises an error
As shown below, the code wraps the data into a Dataset and converts it into an iterable format, but the preprocessing map() call fails with:

```
ValueError: Tensor conversion requested dtype float32 for Tensor with dtype uint8: 'Tensor("arg0:0", shape=(28, 28), dtype=uint8)'
```

Removing the map() makes it run normally, so it does not look like a format-conversion problem. Why does this happen?

Failing code:

```
(x, y), (x_val, y_val) = datasets.mnist.load_data()

def trans(x, y):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y = tf.convert_to_tensor(y, dtype=tf.int32)
    y = tf.one_hot(y, depth=10)
    return x, y

train_db = tf.data.Dataset.from_tensor_slices((x, y))
train_db.map(trans)
train_db.shuffle(10000).batch(32)
```

Working code:

```
(x, y), (x_val, y_val) = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32)
y = tf.convert_to_tensor(y, dtype=tf.int32)
y = tf.one_hot(y, depth=10)
train_db = tf.data.Dataset.from_tensor_slices((x, y))
train_db.shuffle(10000).batch(32)
```
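Inside `map()` the arguments are already symbolic tensors of dtype uint8, and `tf.convert_to_tensor` cannot change an existing tensor's dtype — hence the ValueError; use `tf.cast` instead. Also note that `map`, `shuffle`, and `batch` return new datasets that must be reassigned. A sketch of the usual fix, assuming TensorFlow 2.x and `from tensorflow.keras import datasets`:

```
import tensorflow as tf
from tensorflow.keras import datasets

(x, y), (x_val, y_val) = datasets.mnist.load_data()

def trans(x, y):
    x = tf.cast(x, tf.float32) / 255.0             # cast the existing tensor
    y = tf.one_hot(tf.cast(y, tf.int32), depth=10)
    return x, y

train_db = tf.data.Dataset.from_tensor_slices((x, y))
train_db = train_db.map(trans)                      # map returns a NEW dataset -- reassign it
train_db = train_db.shuffle(10000).batch(32)
```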
Error when using NumPy's reshape function
I ran into an error while learning TensorFlow and could not resolve it after a lot of searching.

```
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)

import matplotlib.pyplot as plt
import numpy as np

np.pad(x_train, ((0, 0), (2, 2), (2, 2)), 'constant', constant_values=0)
x_train = x_train.astype('float32')
x_train /= 255
x_train = x_train.reshape(x_train.shape[0], 32, 32, 1)
```

It fails with: ValueError: cannot reshape array of size 47040000 into shape (60000,32,32,1). I am on Windows 10 and the downloaded data is saved as mnist.npz. I do not understand why it errors — any pointers would be appreciated, thanks!
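`np.pad` does not modify the array in place; it returns a new padded array, and here the return value is discarded, so `x_train` is still 28×28 (60000 × 28 × 28 = 47,040,000 elements, which cannot become 60000 × 32 × 32). Assigning the result makes the reshape work — a short sketch:

```
# Sketch: keep np.pad's return value, then the reshape to 32x32 succeeds.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = np.pad(x_train, ((0, 0), (2, 2), (2, 2)), 'constant', constant_values=0)
x_train = x_train.astype('float32') / 255
x_train = x_train.reshape(x_train.shape[0], 32, 32, 1)   # now (60000, 32, 32, 1)
```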
Question about fashion_mnist classification accuracy
fashion_mnist识别准确率一般为多少呢?我看好多人都是92%左右,但是我用一个网络达到了94%,想问问做过的小伙伴到底是多少? ``` #这是我的结果示意 x_shape: (60000, 28, 28) y_shape: (60000,) epoches: 0 val_acc: 0.4991 train_acc 0.50481665 epoches: 1 val_acc: 0.6765 train_acc 0.66735 epoches: 2 val_acc: 0.755 train_acc 0.7474 epoches: 3 val_acc: 0.7846 train_acc 0.77915 epoches: 4 val_acc: 0.798 train_acc 0.7936 epoches: 5 val_acc: 0.8082 train_acc 0.80365 epoches: 6 val_acc: 0.8146 train_acc 0.8107 epoches: 7 val_acc: 0.8872 train_acc 0.8872333 epoches: 8 val_acc: 0.896 train_acc 0.89348334 epoches: 9 val_acc: 0.9007 train_acc 0.8986 epoches: 10 val_acc: 0.9055 train_acc 0.90243334 epoches: 11 val_acc: 0.909 train_acc 0.9058833 epoches: 12 val_acc: 0.9112 train_acc 0.90868336 epoches: 13 val_acc: 0.9126 train_acc 0.91108334 epoches: 14 val_acc: 0.9151 train_acc 0.9139 epoches: 15 val_acc: 0.9172 train_acc 0.91595 epoches: 16 val_acc: 0.9191 train_acc 0.91798335 epoches: 17 val_acc: 0.9204 train_acc 0.91975 epoches: 18 val_acc: 0.9217 train_acc 0.9220333 epoches: 19 val_acc: 0.9252 train_acc 0.9234667 epoches: 20 val_acc: 0.9259 train_acc 0.92515 epoches: 21 val_acc: 0.9281 train_acc 0.9266667 epoches: 22 val_acc: 0.9289 train_acc 0.92826664 epoches: 23 val_acc: 0.9301 train_acc 0.93005 epoches: 24 val_acc: 0.9315 train_acc 0.93126667 epoches: 25 val_acc: 0.9322 train_acc 0.9328 epoches: 26 val_acc: 0.9331 train_acc 0.9339667 epoches: 27 val_acc: 0.9342 train_acc 0.93523335 epoches: 28 val_acc: 0.9353 train_acc 0.93665 epoches: 29 val_acc: 0.9365 train_acc 0.9379333 epoches: 30 val_acc: 0.9369 train_acc 0.93885 epoches: 31 val_acc: 0.9387 train_acc 0.9399 epoches: 32 val_acc: 0.9395 train_acc 0.9409 epoches: 33 val_acc: 0.94 train_acc 0.9417667 epoches: 34 val_acc: 0.9403 train_acc 0.94271666 epoches: 35 val_acc: 0.9409 train_acc 0.9435167 epoches: 36 val_acc: 0.9418 train_acc 0.94443333 epoches: 37 val_acc: 0.942 train_acc 0.94515 epoches: 38 val_acc: 0.9432 train_acc 0.9460667 epoches: 39 val_acc: 0.9443 train_acc 0.9468833 epoches: 40 val_acc: 0.9445 train_acc 0.94741666 epoches: 41 val_acc: 0.9462 train_acc 0.9482 epoches: 42 val_acc: 0.947 train_acc 0.94893336 epoches: 43 val_acc: 0.9472 train_acc 0.94946665 epoches: 44 val_acc: 0.948 train_acc 0.95028335 epoches: 45 val_acc: 0.9486 train_acc 0.95095 epoches: 46 val_acc: 0.9488 train_acc 0.9515833 epoches: 47 val_acc: 0.9492 train_acc 0.95213336 epoches: 48 val_acc: 0.9495 train_acc 0.9529833 epoches: 49 val_acc: 0.9498 train_acc 0.9537 val_acc: 0.9498 ``` ``` import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt def to_onehot(y,num): lables = np.zeros([num,len(y)]) for i in range(len(y)): lables[y[i],i] = 1 return lables.T # 预处理数据 mnist = keras.datasets.fashion_mnist (train_images,train_lables),(test_images,test_lables) = mnist.load_data() print('x_shape:',train_images.shape) #(60000) print('y_shape:',train_lables.shape) X_train = train_images.reshape((-1,train_images.shape[1]*train_images.shape[1])) / 255.0 #X_train = tf.reshape(X_train,[-1,X_train.shape[1]*X_train.shape[2]]) Y_train = to_onehot(train_lables,10) X_test = test_images.reshape((-1,test_images.shape[1]*test_images.shape[1])) / 255.0 Y_test = to_onehot(test_lables,10) #双隐层的神经网络 input_nodes = 784 output_nodes = 10 layer1_nodes = 100 layer2_nodes = 50 batch_size = 100 learning_rate_base = 0.8 learning_rate_decay = 0.99 regularization_rate = 0.0000001 epochs = 50 mad = 0.99 learning_rate = 0.005 # def inference(input_tensor,avg_class,w1,b1,w2,b2): # if avg_class == None: # layer1 = 
tf.nn.relu(tf.matmul(input_tensor,w1)+b1) # return tf.nn.softmax(tf.matmul(layer1,w2) + b2) # else: # layer1 = tf.nn.relu(tf.matmul(input_tensor,avg_class.average(w1)) + avg_class.average(b1)) # return tf.matual(layer1,avg_class.average(w2)) + avg_class.average(b2) def train(mnist): X = tf.placeholder(tf.float32,[None,input_nodes],name = "input_x") Y = tf.placeholder(tf.float32,[None,output_nodes],name = "y_true") w1 = tf.Variable(tf.truncated_normal([input_nodes,layer1_nodes],stddev=0.1)) b1 = tf.Variable(tf.constant(0.1,shape=[layer1_nodes])) w2 = tf.Variable(tf.truncated_normal([layer1_nodes,layer2_nodes],stddev=0.1)) b2 = tf.Variable(tf.constant(0.1,shape=[layer2_nodes])) w3 = tf.Variable(tf.truncated_normal([layer2_nodes,output_nodes],stddev=0.1)) b3 = tf.Variable(tf.constant(0.1,shape=[output_nodes])) layer1 = tf.nn.relu(tf.matmul(X,w1)+b1) A2 = tf.nn.relu(tf.matmul(layer1,w2)+b2) A3 = tf.nn.relu(tf.matmul(A2,w3)+b3) y_hat = tf.nn.softmax(A3) # y_hat = inference(X,None,w1,b1,w2,b2) # global_step = tf.Variable(0,trainable=False) # variable_averages = tf.train.ExponentialMovingAverage(mad,global_step) # varible_average_op = variable_averages.apply(tf.trainable_variables()) #y = inference(x,variable_averages,w1,b1,w2,b2) cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3,labels=Y)) regularizer = tf.contrib.layers.l2_regularizer(regularization_rate) regularization = regularizer(w1) + regularizer(w2) +regularizer(w3) loss = cross_entropy + regularization * regularization_rate # learning_rate = tf.train.exponential_decay(learning_rate_base,global_step,epchos,learning_rate_decay) # train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step) train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # with tf.control_dependencies([train_step,varible_average_op]): # train_op = tf.no_op(name="train") correct_prediction = tf.equal(tf.argmax(y_hat,1),tf.argmax(Y,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) total_loss = [] val_acc = [] total_train_acc = [] x_Xsis = [] with tf.Session() as sess: tf.global_variables_initializer().run() for i in range(epochs): # x,y = next_batch(X_train,Y_train,batch_size) batchs = int(X_train.shape[0] / batch_size + 1) loss_e = 0. for j in range(batchs): batch_x = X_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:] batch_y = Y_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:] sess.run(train_step,feed_dict={X:batch_x,Y:batch_y}) loss_e += sess.run(loss,feed_dict={X:batch_x,Y:batch_y}) # train_step.run(feed_dict={X:x,Y:y}) validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test}) train_acc = sess.run(accuracy,feed_dict={X:X_train,Y:Y_train}) print("epoches: ",i,"val_acc: ",validate_acc,"train_acc",train_acc) total_loss.append(loss_e / batch_size) val_acc.append(validate_acc) total_train_acc.append(train_acc) x_Xsis.append(i) validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test}) print("val_acc: ",validate_acc) return (x_Xsis,total_loss,total_train_acc,val_acc) result = train((X_train,Y_train,X_test,Y_test)) def plot_acc(total_train_acc,val_acc,x): plt.figure() plt.plot(x,total_train_acc,'--',color = "red",label="train_acc") plt.plot(x,val_acc,color="green",label="val_acc") plt.xlabel("Epoches") plt.ylabel("acc") plt.legend() plt.show() ```
Help: error when torchvision processes the CIFAR-10 dataset
1. Running the training file train.py of the ganomaly model fails; judging from the traceback, the error happens while processing the CIFAR-10 dataset. Screenshot of the error: ![图片说明](https://img-ask.csdn.net/upload/201906/12/1560344374_556226.png)

2. The data.py mentioned in the traceback looks like this:

```
""" LOAD DATA from file. """
# pylint: disable=C0301,E1101,W0622,C0103,R0902,R0915

##
import os
import torch
import numpy as np
import torchvision.datasets as datasets
from torchvision.datasets import MNIST
from torchvision.datasets import CIFAR10
from torchvision.datasets import ImageFolder
import torchvision.transforms as transforms

##
def load_data(opt):
    """ Load Data

    Args:
        opt ([type]): Argument Parser

    Raises:
        IOError: Cannot Load Dataset

    Returns:
        [type]: dataloader
    """
    ······
    dataset['train'].train_data, dataset['train'].train_labels, \
    dataset['test'].test_data, dataset['test'].test_labels = get_cifar_anomaly_dataset(
        trn_img=dataset['train'].train_data,
        trn_lbl=dataset['train'].train_labels,
        tst_img=dataset['test'].test_data,
        tst_lbl=dataset['test'].test_labels,
        abn_cls_idx=classes[opt.anomaly_class]
    )
    ······

##
def get_cifar_anomaly_dataset(trn_img, trn_lbl, tst_img, tst_lbl, abn_cls_idx=0, manualseed=-1):
```

3. Could the error be caused by the four arguments passed to get_cifar_anomaly_dataset() not being defined beforehand?
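One common reason this data.py breaks is that newer torchvision releases removed the `train_data`/`train_labels`/`test_data`/`test_labels` attributes from the CIFAR10/MNIST dataset classes; both splits now expose their arrays through `.data` and `.targets`. A sketch of reading the arrays with the newer attribute names (the alternative is downgrading torchvision to the version ganomaly targets):

```
# Sketch: access CIFAR-10 arrays with the attribute names used by newer torchvision.
from torchvision.datasets import CIFAR10

train_ds = CIFAR10(root="./data", train=True, download=True)
test_ds = CIFAR10(root="./data", train=False, download=True)

trn_img, trn_lbl = train_ds.data, train_ds.targets   # replaces train_data / train_labels
tst_img, tst_lbl = test_ds.data, test_ds.targets     # replaces test_data / test_labels
print(trn_img.shape, len(trn_lbl))                   # (50000, 32, 32, 3) 50000
```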
Handwritten digit recognition with OpenCV-Python's built-in ANN raises an error — help needed
![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479207_695592.png)![图片说明](https://img-ask.csdn.net/upload/202001/31/1580479217_497206.png) 我用python3.6按照《OpenCV3计算机视觉》书上代码进行手写字识别,识别率很低,运行时还报了错:OpenCV(3.4.1) Error: Assertion failed ((type == 5 || type == 6) && inputs.cols == layer_sizes[0]) in cv::ml::ANN_MLPImpl::predict, file C:\projects\opencv-python\opencv\modules\ml\src\ann_mlp.cpp, line 411 ``` 具体代码如下:求大佬指点下 import cv2 import numpy as np import digits_ann as ANN def inside(r1, r2): x1, y1, w1, h1 = r1 x2, y2, w2, h2 = r2 if (x1 > x2) and (y1 > y2) and (x1 + w1 < x2 + w2) and (y1 + h1 < y2 + h2): return True else: return False def wrap_digit(rect): x, y, w, h = rect padding = 5 hcenter = x + w / 2 vcenter = y + h / 2 if (h > w): w = h x = hcenter - (w / 2) else: h = w y = vcenter - (h / 2) return (int(x - padding), int(y - padding), int(w + padding), int(h + padding)) ''' 注意:首次测试时,建议将使用完整的训练数据集,且进行多次迭代,直到收敛 如:ann, test_data = ANN.train(ANN.create_ANN(100), 50000, 30) ''' ann, test_data = ANN.train(ANN.create_ANN(10), 50000, 1) # 调用所需识别的图片,并处理 path = "C:\\Users\\64601\\PycharmProjects\Ann\\images\\numbers.jpg" img = cv2.imread(path, cv2.IMREAD_UNCHANGED) bw = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) bw = cv2.GaussianBlur(bw, (7, 7), 0) ret, thbw = cv2.threshold(bw, 127, 255, cv2.THRESH_BINARY_INV) thbw = cv2.erode(thbw, np.ones((2, 2), np.uint8), iterations=2) image, cntrs, hier = cv2.findContours(thbw.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) rectangles = [] for c in cntrs: r = x, y, w, h = cv2.boundingRect(c) a = cv2.contourArea(c) b = (img.shape[0] - 3) * (img.shape[1] - 3) is_inside = False for q in rectangles: if inside(r, q): is_inside = True break if not is_inside: if not a == b: rectangles.append(r) for r in rectangles: x, y, w, h = wrap_digit(r) cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2) roi = thbw[y:y + h, x:x + w] try: digit_class = ANN.predict(ann, roi)[0] except: print("except") continue cv2.putText(img, "%d" % digit_class, (x, y - 1), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0)) cv2.imshow("thbw", thbw) cv2.imshow("contours", img) cv2.waitKey() cv2.destroyAllWindows() ####### import cv2 import pickle import numpy as np import gzip """OpenCV ANN Handwritten digit recognition example Wraps OpenCV's own ANN by automating the loading of data and supplying default paramters, such as 20 hidden layers, 10000 samples and 1 training epoch. 
The load data code is taken from http://neuralnetworksanddeeplearning.com/chap1.html by Michael Nielsen """ def vectorized_result(j): e = np.zeros((10, 1)) e[j] = 1.0 return e def load_data(): with gzip.open('C:\\Users\\64601\\PycharmProjects\\Ann\\mnist.pkl.gz') as fp: # 注意版本不同,需要添加传入第二个参数encoding='bytes',否则出现编码错误 training_data, valid_data, test_data = pickle.load(fp, encoding='bytes') fp.close() return (training_data, valid_data, test_data) def wrap_data(): # tr_d数组长度为50000,va_d数组长度为10000,te_d数组长度为10000 tr_d, va_d, te_d = load_data() # 训练数据集 training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]] training_results = [vectorized_result(y) for y in tr_d[1]] training_data = list(zip(training_inputs, training_results)) # 校验数据集 validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]] validation_data = list(zip(validation_inputs, va_d[1])) # 测试数据集 test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]] test_data = list(zip(test_inputs, te_d[1])) return (training_data, validation_data, test_data) def create_ANN(hidden=20): ann = cv2.ml.ANN_MLP_create() # 建立模型 ann.setTrainMethod(cv2.ml.ANN_MLP_RPROP | cv2.ml.ANN_MLP_UPDATE_WEIGHTS) # 设置训练方式为反向传播 ann.setActivationFunction( cv2.ml.ANN_MLP_SIGMOID_SYM) # 设置激活函数为SIGMOID,还有cv2.ml.ANN_MLP_IDENTITY,cv2.ml.ANNMLP_GAUSSIAN ann.setLayerSizes(np.array([784, hidden, 10])) # 设置层数,输入784层,输出层10 ann.setTermCriteria((cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 0.1)) # 设置终止条件 return ann def train(ann, samples=10000, epochs=1): # tr:训练数据集; val:校验数据集; test:测试数据集; tr, val, test = wrap_data() for x in range(epochs): counter = 0 for img in tr: if (counter > samples): break if (counter % 1000 == 0): print("Epoch %d: Trained %d/%d" % (x, counter, samples)) counter += 1 data, digit = img ann.train(np.array([data.ravel()], dtype=np.float32), cv2.ml.ROW_SAMPLE, np.array([digit.ravel()], dtype=np.float32)) print("Epoch %d complete" % x) return ann, test def predict(ann, sample): resized = sample.copy() rows, cols = resized.shape if rows != 28 and cols != 28 and rows * cols > 0: resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC) return ann.predict(np.array([resized.ravel()], dtype=np.float32)) ```
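One likely cause of the assertion `inputs.cols == layer_sizes[0]` is the resize guard in `predict()`: the condition `rows != 28 and cols != 28 and rows * cols > 0` uses `and`, so any ROI that already matches one of the two dimensions is never resized, and its flattened length is then not the 784 columns the network was built with. A sketch of a safer predict, assuming `ann` was created with 784 input nodes as in `create_ANN()`:

```
# Sketch: always resize the ROI to 28x28 so the flattened sample has exactly
# layer_sizes[0] == 784 columns.
import cv2
import numpy as np

def predict(ann, sample):
    resized = sample.copy()
    if resized.shape != (28, 28):            # the original 'and' let some ROIs through
        resized = cv2.resize(resized, (28, 28), interpolation=cv2.INTER_CUBIC)
    sample_row = np.array([resized.ravel()], dtype=np.float32)   # shape (1, 784)
    return ann.predict(sample_row)
```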
MNIST: my own handwritten digit is always classified incorrectly
Results from running in Jupyter are below.
## Importing the data
```
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1).astype('float32')
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1).astype('float32')
x_train /= 255
x_test /= 255
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
```
## Training the network
```
model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=100, epochs=20)
```
## Training results
![图片说明](https://img-ask.csdn.net/upload/201712/18/1513607157_995645.png)
## Validating with my own handwritten digits always gives the same prediction
```
# convert the image to grayscale and resize it to 28x28
def convert_gray(f, **args):
    rgb = io.imread(f)
    gray = color.rgb2gray(rgb)
    dst = transform.resize(gray, (28, 28))
    return dst
```
test_gray_resize = convert_gray('number3.png')
![图片说明](https://img-ask.csdn.net/upload/201712/18/1513607490_324155.png) ![图片说明](https://img-ask.csdn.net/upload/201712/18/1513607537_288275.png)
As shown above, I handwrote a 3 but the prediction is 6, and no matter which of my own handwritten images I try, the prediction is always 6. I have just started with deep learning and this bug has had me stuck for days — please help.
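A very common cause here is a preprocessing mismatch: MNIST digits are white strokes on a black background with values in [0, 1], whereas a photo or scan is usually dark strokes on a white background, so the network sees something unlike anything it was trained on and collapses onto one class. A sketch of preprocessing a custom image to match the training data (the inversion threshold and helper name are assumptions):

```
# Sketch: grayscale, 28x28, values in [0, 1], white digit on black background.
import numpy as np
from skimage import io, color, transform

def prepare_digit(path):
    img = color.rgb2gray(io.imread(path))      # grayscale in [0, 1]
    img = transform.resize(img, (28, 28))
    if img.mean() > 0.5:                       # bright background -> invert, like MNIST
        img = 1.0 - img
    return img.reshape(1, 28, 28, 1).astype('float32')

x = prepare_digit('number3.png')               # file name from the question
pred = model.predict(x).argmax(axis=1)         # assumes `model` from the question is in scope
print(pred)
```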
How do I load the AR face dataset into the CNN example test_example_CNN.m?
How do I load the AR face dataset into CNN test_example_CNN.m? Below is the example from DeepLearnToolbox-master:

```
function test_example_CNN
load mnist_uint8;

train_x = double(reshape(train_x',28,28,60000))/255;
test_x  = double(reshape(test_x',28,28,10000))/255;
train_y = double(train_y');
test_y  = double(test_y');
```

In other words, how should the code above be modified so that it uses the AR face dataset instead? The contents of Data_AR.mat are:

```
Name            Size         Bytes    Class     Attributes
imgCol          1x1              8    double
imgRow          1x1              8    double
samples      2600x1024    2662400    uint8
samplesnum   2600x1         20800    double
```
Multi-digit MNIST recognition wanted — please help fix and complete my code
实现多个 数字的识别 如 ![图片说明](https://img-ask.csdn.net/upload/201906/10/1560141462_156564.png) 实现方法: 把五个数字的图 拼成一个 再进行 训练 和 测试 学校讲的自己看的都一知半解,我都不知道 我一个大二的,没什么基础的学生是怎么选上做这个研究的。。马上due就快到了,求大神在我代码基础上帮我完成。。 ``` ``` import tensorflow as tf import numpy as np from numpy import array import os import cv2 import matplotlib.pyplot as plt mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() # x_train = tf.keras.utils.normalize(x_train, axis=1) # x_test = tf.keras.utils.normalize(x_test, axis=1) import random countain = 0 myxtrain = [] myytrain = [] while countain < 5: a_1 = random.randint(0, 10000) a_2 = random.randint(0, 10000) a_3 = random.randint(0, 10000) a_4 = random.randint(0, 10000) a_5 = random.randint(0, 10000) a = np.concatenate((x_train[a_1], x_train[a_2], x_train[a_3], x_train[a_4], x_train[a_5]), axis=1) myxtrain.append(a) labelx = [] s1 = str(y_train[a_1]) s2 = str(y_train[a_2]) s3 = str(y_train[a_3]) s4 = str(y_train[a_4]) s5 = str(y_train[a_5]) labelx.append(s1) labelx.append(s2) labelx.append(s3) labelx.append(s4) labelx.append(s5) s = ' '.join(labelx) print('----', s) # cv2.imwrite(os.path.join(path1, s + '.jpg'), a) # cv2.waitKey(0) myytrain.append(s) countain += 1 x_train = array(myxtrain) y_train = array(myytrain) from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.optimizers import SGD model = tf.keras.models.Sequential() model.add(Conv2D(5, (3, 3), activation='relu', input_shape=(28, 140, 2))) model.add(Conv2D(5, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(10, activation='softmax')) sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='categorical_crossentropy', optimizer=sgd) model.fit(x_train, y_train, batch_size=1, epochs=10) ``` ```
utils.apply_modifications fails when doing Keras visualization
**#(1)用mnist文件生成了model.h5文件:** import numpy as np import keras from keras.datasets import mnist from keras.models import Sequential,Model from keras.layers import Dense,Dropout,Flatten,Activation,Input from keras.layers import Conv2D,MaxPooling2D from keras import backend as K batch_size=128 num_classes=10 epochs=5 #定义图像的长宽 img_rows,img_cols=28,28 #加载mnist数据集 (x_train,y_train),(x_test,y_test)=mnist.load_data() #定义图像的格式 x_train=x_train.reshape(x_train.shape[0],img_rows,img_cols,1) x_test=x_test.reshape(x_test.shape[0],img_rows,img_cols,1) input_shape=(img_rows,img_cols,1) x_train=x_train.astype('float32') x_test=x_test.astype('float32') x_train/=255 x_test/=255 print('x_train shape:',x_train.shape) print(x_train.shape[0],'train samples') print(x_test.shape[0],'test samples') y_train=keras.utils.to_categorical(y_train,num_classes) y_test=keras.utils.to_categorical(y_test,num_classes) #开始DNN网络 model=Sequential() model.add(Conv2D(32,kernel_size=(3,3),activation='relu',input_shape=input_shape)) model.add(Conv2D(54,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes,activation='softmax',name='preds')) model.compile(loss=keras.losses.categorical_crossentropy,optimizer=keras.optimizers.Adam(),metrics=['accuracy']) model.fit(x_train,y_train,batch_size=batch_size,epochs=epochs,verbose=1,validation_data=(x_test,y_test)) score=model.evaluate(x_test,y_test,verbose=0) print('Test loss:',score[0]) print('Test accuracy:',score[1]) model.save('model.h5') **#(2)用生成的mnist文件做测试:** from keras.models import load_model from vis.utils import utils from keras import activations model=load_model('model.h5') layer_idx=utils.find_layer_idx(model,'preds') model.layers[layer_idx].activation=activations.linear model = utils.apply_modifications(model) 报错:FileNotFoundError: [WinError 3] 系统找不到指定的路径。: '/tmp/curzzxs_.h5'
Keras progress bar prints across multiple lines when run from PyCharm
Recently, when running Keras code in PyCharm, the progress bar gets printed across many lines and I do not know why; the same code run in Spyder updates the progress bar normally on a single line. The code is a standard deep-learning example. I could not find a good solution online — any help solving this would be appreciated.
```
from keras import layers, models
from keras.datasets import mnist
from keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPool2D(2, 2))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPool2D(2, 2))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=6, batch_size=64)
# test_loss, test_acc = model.evaluate(test_images, test_labels)
# print(test_loss, test_acc)
```
![图片说明](https://img-ask.csdn.net/upload/201910/07/1570448232_727191.png)
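The multi-line output appears because PyCharm's plain run console does not handle the carriage-return updates that Keras' `verbose=1` progress bar relies on. Either enable "Emulate terminal in output console" in the PyCharm run configuration, or switch to one summary line per epoch — a one-line sketch of the code-side workaround:

```
# Sketch: verbose=2 prints one line per epoch instead of a live progress bar,
# which PyCharm's console shows cleanly.
model.fit(train_images, train_labels, epochs=6, batch_size=64, verbose=2)
```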
Running TensorFlow raises tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed
运行tensorflow时出现tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed这个错误,查了一下说是gpu被占用了,从下面这里开始出问题的: ``` 2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) (60000, 28, 28) (60000, 10) 2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found ``` ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png) 最后显示的问题: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png) 试了一下网上的方法,比如加代码: ``` gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) ``` 但最后提示: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png) 现在不知道要怎么解决了。新手想试下简单的数字识别,步骤也是按教程一步步来的,可能用的版本和教程不一样,我用的是刚下的:2.0tensorflow和以下: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png) 不知道会不会有版本问题,现在紧急求助各位大佬,还有没有其它可以尝试的方法。测试程序加法运算可以执行,数字识别图片运行的时候我看了下,GPU最大占有率才0.2%,下面是完整数字图片识别代码: ``` import os import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers, optimizers, datasets os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' #gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2) #sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) (x, y), (x_val, y_val) = datasets.mnist.load_data() x = tf.convert_to_tensor(x, dtype=tf.float32) / 255. y = tf.convert_to_tensor(y, dtype=tf.int32) y = tf.one_hot(y, depth=10) print(x.shape, y.shape) train_dataset = tf.data.Dataset.from_tensor_slices((x, y)) train_dataset = train_dataset.batch(200) model = keras.Sequential([ layers.Dense(512, activation='relu'), layers.Dense(256, activation='relu'), layers.Dense(10)]) optimizer = optimizers.SGD(learning_rate=0.001) def train_epoch(epoch): # Step4.loop for step, (x, y) in enumerate(train_dataset): with tf.GradientTape() as tape: # [b, 28, 28] => [b, 784] x = tf.reshape(x, (-1, 28 * 28)) # Step1. compute output # [b, 784] => [b, 10] out = model(x) # Step2. compute loss loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0] # Step3. optimize and update w1, w2, w3, b1, b2, b3 grads = tape.gradient(loss, model.trainable_variables) # w' = w - lr * grad optimizer.apply_gradients(zip(grads, model.trainable_variables)) if step % 100 == 0: print(epoch, step, 'loss:', loss.numpy()) def train(): for epoch in range(30): train_epoch(epoch) if __name__ == '__main__': train() ``` 希望能有人给下建议或解决方法,拜谢!
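In TensorFlow 2.0, `tf.GPUOptions` and `tf.Session` belong to the removed 1.x API, which is why the suggested fix from the web fails. The closest TF 2.x equivalent is enabling memory growth through `tf.config`, placed right after the import and before any model is built; also make sure no other process (for example a still-open notebook kernel) is already holding the GPU, which is another common trigger of "Blas GEMM launch failed". A sketch:

```
# Sketch: limit GPU memory allocation in TensorFlow 2.x (replacement for the
# tf.GPUOptions / tf.Session snippet, which only exists in TF 1.x).
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)   # allocate memory on demand
```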
On Ubuntu 16.04, cuDNN cannot be loaded when running from Sublime
On Ubuntu 16.04, cuDNN cannot be loaded when the script is launched from Sublime, but launching the same script from a terminal produces no error and everything loads correctly.

```
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] Couldn't open CUDA library libcudnn.so. LD_LIBRARY_PATH:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:3448] Unable to load cuDNN DSO
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
```

I am new to this — any help from the experts would be appreciated.
PyCharm shows only "Process finished with exit code 0" after running
from __future__ import absolute_import from __future__ import division from __future__ import print_function import csv import os.path import time import numpy as np import tensorflow as tf import gpr import load_dataset import nngp tf.logging.set_verbosity(tf.logging.INFO) flags = tf.app.flags FLAGS = flags.FLAGS flags.DEFINE_string('hparams', '', 'Comma separated list of name=value hyperparameter pairs to' 'override the default setting.') flags.DEFINE_string('experiment_dir', '/tmp/nngp', 'Directory to put the experiment results.') flags.DEFINE_string('grid_path', 'pythonplace/nngp/grid_data', 'Directory to put or find the training data.') flags.DEFINE_integer('num_train', 1000, 'Number of training data.') flags.DEFINE_integer('num_eval', 1000, 'Number of evaluation data. Use 10_000 for full eval') flags.DEFINE_integer('seed', 1234, 'Random number seed for data shuffling') flags.DEFINE_boolean('save_kernel', False, 'Save Kernel do disk') flags.DEFINE_string('dataset', 'mnist', 'Which dataset to use ["mnist"]') flags.DEFINE_boolean('use_fixed_point_norm', False, 'Normalize input variance to fixed point variance') flags.DEFINE_integer('n_gauss', 501, 'Number of gaussian integration grid. Choose odd integer.') flags.DEFINE_integer('n_var', 501, 'Number of variance grid points.') flags.DEFINE_integer('n_corr', 500, 'Number of correlation grid points.') flags.DEFINE_integer('max_var', 100, 'Max value for variance grid.') flags.DEFINE_integer('max_gauss', 10, 'Range for gaussian integration.') def set_default_hparams(): return tf.contrib.training.HParams( nonlinearity='tanh', weight_var=1.3, bias_var=0.2, depth=2) def do_eval(sess, model, x_data, y_data, save_pred=False): """Run evaluation.""" gp_prediction, stability_eps = model.predict(x_data, sess) pred_1 = np.argmax(gp_prediction, axis=1) accuracy = np.sum(pred_1 == np.argmax(y_data, axis=1)) / float(len(y_data)) mse = np.mean(np.mean((gp_prediction - y_data)**2, axis=1)) pred_norm = np.mean(np.linalg.norm(gp_prediction, axis=1)) tf.logging.info('Accuracy: %.4f'%accuracy) tf.logging.info('MSE: %.8f'%mse) if save_pred: with tf.gfile.Open( os.path.join(FLAGS.experiment_dir, 'gp_prediction_stats.npy'), 'w') as f: np.save(f, gp_prediction) return accuracy, mse, pred_norm, stability_eps def run_nngp_eval(hparams, run_dir): """Runs experiments.""" tf.gfile.MakeDirs(run_dir) # Write hparams to experiment directory. with tf.gfile.GFile(run_dir + '/hparams', mode='w') as f: f.write(hparams.to_proto().SerializeToString()) tf.logging.info('Starting job.') tf.logging.info('Hyperparameters') tf.logging.info('---------------------') tf.logging.info(hparams) tf.logging.info('---------------------') tf.logging.info('Loading data') # Get the sets of images and labels for training, validation, and # # test on dataset. 
if FLAGS.dataset == 'mnist': (train_image, train_label, valid_image, valid_label, test_image, test_label) = load_dataset.load_mnist( num_train=FLAGS.num_train, mean_subtraction=True, random_roated_labels=False) else: raise NotImplementedError tf.logging.info('Building Model') if hparams.nonlinearity == 'tanh': nonlin_fn = tf.tanh elif hparams.nonlinearity == 'relu': nonlin_fn = tf.nn.relu else: raise NotImplementedError with tf.Session() as sess: # Construct NNGP kernel nngp_kernel = nngp.NNGPKernel( depth=hparams.depth, weight_var=hparams.weight_var, bias_var=hparams.bias_var, nonlin_fn=nonlin_fn, grid_path=FLAGS.grid_path, n_gauss=FLAGS.n_gauss, n_var=FLAGS.n_var, n_corr=FLAGS.n_corr, max_gauss=FLAGS.max_gauss, max_var=FLAGS.max_var, use_fixed_point_norm=FLAGS.use_fixed_point_norm) input("hello") # Construct Gaussian Process Regression model model = gpr.GaussianProcessRegression( train_image, train_label, kern=nngp_kernel) start_time = time.time() tf.logging.info('Training') # For large number of training points, we do not evaluate on full set to # save on training evaluation time. if FLAGS.num_train <= 5000: acc_train, mse_train, norm_train, final_eps = do_eval( sess, model, train_image[:FLAGS.num_eval], train_label[:FLAGS.num_eval]) tf.logging.info('Evaluation of training set (%d examples) took ' '%.3f secs'%( min(FLAGS.num_train, FLAGS.num_eval), time.time() - start_time)) else: acc_train, mse_train, norm_train, final_eps = do_eval( sess, model, train_image[:1000], train_label[:1000]) tf.logging.info('Evaluation of training set (%d examples) took ' '%.3f secs'%(1000, time.time() - start_time)) start_time = time.time() tf.logging.info('Validation') acc_valid, mse_valid, norm_valid, _ = do_eval( sess, model, valid_image[:FLAGS.num_eval], valid_label[:FLAGS.num_eval]) tf.logging.info('Evaluation of valid set (%d examples) took %.3f secs'%( FLAGS.num_eval, time.time() - start_time)) start_time = time.time() tf.logging.info('Test') acc_test, mse_test, norm_test, _ = do_eval( sess, model, test_image[:FLAGS.num_eval], test_label[:FLAGS.num_eval], save_pred=False) tf.logging.info('Evaluation of test set (%d examples) took %.3f secs'%( FLAGS.num_eval, time.time() - start_time)) metrics = { 'train_acc': float(acc_train), 'train_mse': float(mse_train), 'train_norm': float(norm_train), 'valid_acc': float(acc_valid), 'valid_mse': float(mse_valid), 'valid_norm': float(norm_valid), 'test_acc': float(acc_test), 'test_mse': float(mse_test), 'test_norm': float(norm_test), 'stability_eps': float(final_eps), } record_results = [ FLAGS.num_train, hparams.nonlinearity, hparams.weight_var, hparams.bias_var, hparams.depth, acc_train, acc_valid, acc_test, mse_train, mse_valid, mse_test, final_eps ] if nngp_kernel.use_fixed_point_norm: metrics['var_fixed_point'] = float(nngp_kernel.var_fixed_point_np[0]) record_results.append(nngp_kernel.var_fixed_point_np[0]) # Store data result_file = os.path.join(run_dir, 'results.csv') with tf.gfile.Open(result_file, 'a') as f: filewriter = csv.writer(f) filewriter.writerow(record_results) return metrics if __name__ == '__main__': # tf.app.run(main) hparams = set_default_hparams().parse(FLAGS.hparams) print("hparams:", hparams) x = FLAGS.experiment_dir print(x) run_nngp_eval(hparams, x)