fashion_mnist识别准确率问题

fashion_mnist识别准确率一般为多少呢?我看好多人都是92%左右,但是我用一个网络达到了94%,想问问做过的小伙伴到底是多少?

#这是我的结果示意
x_shape: (60000, 28, 28)
y_shape: (60000,)
epoches:  0 val_acc:  0.4991 train_acc 0.50481665
epoches:  1 val_acc:  0.6765 train_acc 0.66735
epoches:  2 val_acc:  0.755 train_acc 0.7474
epoches:  3 val_acc:  0.7846 train_acc 0.77915
epoches:  4 val_acc:  0.798 train_acc 0.7936
epoches:  5 val_acc:  0.8082 train_acc 0.80365
epoches:  6 val_acc:  0.8146 train_acc 0.8107
epoches:  7 val_acc:  0.8872 train_acc 0.8872333
epoches:  8 val_acc:  0.896 train_acc 0.89348334
epoches:  9 val_acc:  0.9007 train_acc 0.8986
epoches:  10 val_acc:  0.9055 train_acc 0.90243334
epoches:  11 val_acc:  0.909 train_acc 0.9058833
epoches:  12 val_acc:  0.9112 train_acc 0.90868336
epoches:  13 val_acc:  0.9126 train_acc 0.91108334
epoches:  14 val_acc:  0.9151 train_acc 0.9139
epoches:  15 val_acc:  0.9172 train_acc 0.91595
epoches:  16 val_acc:  0.9191 train_acc 0.91798335
epoches:  17 val_acc:  0.9204 train_acc 0.91975
epoches:  18 val_acc:  0.9217 train_acc 0.9220333
epoches:  19 val_acc:  0.9252 train_acc 0.9234667
epoches:  20 val_acc:  0.9259 train_acc 0.92515
epoches:  21 val_acc:  0.9281 train_acc 0.9266667
epoches:  22 val_acc:  0.9289 train_acc 0.92826664
epoches:  23 val_acc:  0.9301 train_acc 0.93005
epoches:  24 val_acc:  0.9315 train_acc 0.93126667
epoches:  25 val_acc:  0.9322 train_acc 0.9328
epoches:  26 val_acc:  0.9331 train_acc 0.9339667
epoches:  27 val_acc:  0.9342 train_acc 0.93523335
epoches:  28 val_acc:  0.9353 train_acc 0.93665
epoches:  29 val_acc:  0.9365 train_acc 0.9379333
epoches:  30 val_acc:  0.9369 train_acc 0.93885
epoches:  31 val_acc:  0.9387 train_acc 0.9399
epoches:  32 val_acc:  0.9395 train_acc 0.9409
epoches:  33 val_acc:  0.94 train_acc 0.9417667
epoches:  34 val_acc:  0.9403 train_acc 0.94271666
epoches:  35 val_acc:  0.9409 train_acc 0.9435167
epoches:  36 val_acc:  0.9418 train_acc 0.94443333
epoches:  37 val_acc:  0.942 train_acc 0.94515
epoches:  38 val_acc:  0.9432 train_acc 0.9460667
epoches:  39 val_acc:  0.9443 train_acc 0.9468833
epoches:  40 val_acc:  0.9445 train_acc 0.94741666
epoches:  41 val_acc:  0.9462 train_acc 0.9482
epoches:  42 val_acc:  0.947 train_acc 0.94893336
epoches:  43 val_acc:  0.9472 train_acc 0.94946665
epoches:  44 val_acc:  0.948 train_acc 0.95028335
epoches:  45 val_acc:  0.9486 train_acc 0.95095
epoches:  46 val_acc:  0.9488 train_acc 0.9515833
epoches:  47 val_acc:  0.9492 train_acc 0.95213336
epoches:  48 val_acc:  0.9495 train_acc 0.9529833
epoches:  49 val_acc:  0.9498 train_acc 0.9537
val_acc:  0.9498


import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

def to_onehot(y,num):
    lables = np.zeros([num,len(y)])
    for i in range(len(y)):
        lables[y[i],i] = 1
    return lables.T

# 预处理数据
mnist = keras.datasets.fashion_mnist
(train_images,train_lables),(test_images,test_lables) = mnist.load_data()

print('x_shape:',train_images.shape)
#(60000)
print('y_shape:',train_lables.shape)

X_train = train_images.reshape((-1,train_images.shape[1]*train_images.shape[1])) / 255.0
#X_train = tf.reshape(X_train,[-1,X_train.shape[1]*X_train.shape[2]])
Y_train = to_onehot(train_lables,10)
X_test = test_images.reshape((-1,test_images.shape[1]*test_images.shape[1])) / 255.0
Y_test = to_onehot(test_lables,10)

#双隐层的神经网络
input_nodes = 784
output_nodes = 10
layer1_nodes = 100
layer2_nodes = 50
batch_size = 100
learning_rate_base = 0.8
learning_rate_decay = 0.99
regularization_rate = 0.0000001
epochs = 50
mad = 0.99
learning_rate  = 0.005

# def inference(input_tensor,avg_class,w1,b1,w2,b2):
#     if avg_class == None:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor,w1)+b1)
#         return tf.nn.softmax(tf.matmul(layer1,w2) + b2)
#     else:
#         layer1 = tf.nn.relu(tf.matmul(input_tensor,avg_class.average(w1)) + avg_class.average(b1))
#         return  tf.matual(layer1,avg_class.average(w2)) + avg_class.average(b2)

def train(mnist):
    X = tf.placeholder(tf.float32,[None,input_nodes],name = "input_x")
    Y = tf.placeholder(tf.float32,[None,output_nodes],name = "y_true")
    w1 = tf.Variable(tf.truncated_normal([input_nodes,layer1_nodes],stddev=0.1))
    b1 = tf.Variable(tf.constant(0.1,shape=[layer1_nodes]))
    w2 = tf.Variable(tf.truncated_normal([layer1_nodes,layer2_nodes],stddev=0.1))
    b2 = tf.Variable(tf.constant(0.1,shape=[layer2_nodes]))
    w3 = tf.Variable(tf.truncated_normal([layer2_nodes,output_nodes],stddev=0.1))
    b3 = tf.Variable(tf.constant(0.1,shape=[output_nodes]))

    layer1 = tf.nn.relu(tf.matmul(X,w1)+b1)
    A2 = tf.nn.relu(tf.matmul(layer1,w2)+b2)
    A3 = tf.nn.relu(tf.matmul(A2,w3)+b3)

    y_hat = tf.nn.softmax(A3)
#     y_hat = inference(X,None,w1,b1,w2,b2)

#     global_step = tf.Variable(0,trainable=False)
#     variable_averages = tf.train.ExponentialMovingAverage(mad,global_step)
#     varible_average_op = variable_averages.apply(tf.trainable_variables())

    #y = inference(x,variable_averages,w1,b1,w2,b2)
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3,labels=Y))
    regularizer = tf.contrib.layers.l2_regularizer(regularization_rate)

    regularization = regularizer(w1) + regularizer(w2) +regularizer(w3)
    loss = cross_entropy + regularization * regularization_rate

#     learning_rate = tf.train.exponential_decay(learning_rate_base,global_step,epchos,learning_rate_decay)

#     train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)


#     with tf.control_dependencies([train_step,varible_average_op]):
#         train_op = tf.no_op(name="train")


    correct_prediction = tf.equal(tf.argmax(y_hat,1),tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
    total_loss = []
    val_acc = []
    total_train_acc = []
    x_Xsis = []

    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        for i in range(epochs):
#             x,y = next_batch(X_train,Y_train,batch_size)
            batchs = int(X_train.shape[0] / batch_size + 1)
            loss_e = 0.
            for j in range(batchs):

                batch_x = X_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:]
                batch_y = Y_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:]
                sess.run(train_step,feed_dict={X:batch_x,Y:batch_y})
                loss_e += sess.run(loss,feed_dict={X:batch_x,Y:batch_y})
#             train_step.run(feed_dict={X:x,Y:y})
            validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test})
            train_acc = sess.run(accuracy,feed_dict={X:X_train,Y:Y_train})
            print("epoches: ",i,"val_acc: ",validate_acc,"train_acc",train_acc) 
            total_loss.append(loss_e / batch_size)
            val_acc.append(validate_acc)
            total_train_acc.append(train_acc)
            x_Xsis.append(i)
        validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test})
        print("val_acc: ",validate_acc)
    return (x_Xsis,total_loss,total_train_acc,val_acc)

result = train((X_train,Y_train,X_test,Y_test))

def plot_acc(total_train_acc,val_acc,x):
    plt.figure()
    plt.plot(x,total_train_acc,'--',color = "red",label="train_acc")
    plt.plot(x,val_acc,color="green",label="val_acc")
    plt.xlabel("Epoches")
    plt.ylabel("acc")
    plt.legend()
    plt.show()

1个回答

Csdn user default icon
上传中...
上传图片
插入图片
抄袭、复制答案,以达到刷声望分或其他目的的行为,在CSDN问答是严格禁止的,一经发现立刻封号。是时候展现真正的技术了!
其他相关推荐
tensorflow在第一次运行Fashion MNIST会下载数据集,应该网络不好中断了报错不知咋办?

```**tensorflow在第一次运行Fashion MNIST会下载数据集,应该网络不好中断了报错不知咋办?** 代码如下: !/usr/bin/python _*_ coding: utf-8 -*- from __future__ import print_function import tensorflow as tf import matplotlib as mpl import matplotlib.pyplot as plt %matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time from tensorflow import keras print (tf.__version__) print (sys.version_info) for module in mpl ,np, pd, sklearn, tf, keras: print (module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data() x_valid,x_train = x_train_all[:5000],x_train_all[5000:] y_valid,y_train = y_train_all[:5000],y_train_all[5000:] print (x_valid.shape, y_valid.shape) print (x_train.shape, y_train.shape) print (x_test.shape, y_test.shape) ``` ``` 报错如下: 2.1.0 sys.version_info(major=2, minor=7, micro=12, releaselevel='final', serial=0) matplotlib 2.2.5 numpy 1.16.6 pandas 0.24.2 sklearn 0.20.4 tensorflow 2.1.0 tensorflow_core.python.keras.api._v2.keras 2.2.4-tf Traceback (most recent call last): File "/home/join/test_demo/test2.py", line 26, in <module> (x_train_all,y_train_all),(x_test,y_test) = fashion_mnist.load_data() File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/keras/data sets/fashion_mnist.py", line 59, in load_data imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28) File "/usr/lib/python2.7/gzip.py", line 261, in read self._read(readsize) File "/usr/lib/python2.7/gzip.py", line 315, in _read self._read_eof() File "/usr/lib/python2.7/gzip.py", line 354, in _read_eof hex(self.crc))) IOError: CRC check failed 0xa445bb78 != 0xe7f80d 3fL ``` ```

在Spyder界面中使用tensorflow进行fashion_mnist数据集学习,结果loss为非数,并且准确率一直未变

1.建立了一个3个全连接层的神经网络; 2.代码如下: ``` import matplotlib as mpl import matplotlib.pyplot as plt #%matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time import tensorflow as tf from tensorflow import keras print(tf.__version__) print(sys.version_info) for module in mpl, np, sklearn, tf, keras: print(module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data() x_valid, x_train = x_train_all[:5000], x_train_all[5000:] y_valid, y_train = y_train_all[:5000], y_train_all[5000:] #tf.keras.models.Sequential model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape= [28,28])) model.add(keras.layers.Dense(300, activation="relu")) model.add(keras.layers.Dense(100, activation="relu")) model.add(keras.layers.Dense(10,activation="softmax")) ###sparse为最后输出为index类型,如果为one hot类型,则不需加sparse model.compile(loss = "sparse_categorical_crossentropy",optimizer = "sgd", metrics = ["accuracy"]) #model.layers #model.summary() history = model.fit(x_train, y_train, epochs=10, validation_data=(x_valid,y_valid)) ``` 3.输出结果: ``` runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2') 2.0.0 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0) matplotlib 3.1.1 numpy 1.16.5 sklearn 0.21.3 tensorflow 2.0.0 tensorflow_core.keras 2.2.4-tf Train on 55000 samples, validate on 5000 samples Epoch 1/10 WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: 55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 2/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 3/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 4/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 5/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 6/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 7/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 8/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 9/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 10/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 ```

自己制作的类似Fashion-MNIST数据集,怎么使用

在做想关项目,因为需要自己的数据集,因此我按照要求做了一个,如下 ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843320_334333.png) 用的是MXNet框架,jupyter notebook写 我自己的做法是把测试集和训练集用数组读取后包装 训练集如下,两个数组分别是图片像素和对应标签 ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843532_588917.png) 训练过程如下,为了能分别遍历又拆了train_iter为train_iter[0]、[1] ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574843838_858564.png) 接着在导入训练模型(第12行)时候出现问题,报错如下 ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574844074_977535.png) 搞了几天不明白这个数据类型,是导入数据集的方式错了还是、、、、 下面这个是载入Fashion-MNIST数据集的函数 没看明白,现在还在尝试,但是有大佬指导下就更好了(求~) ![图片说明](https://img-ask.csdn.net/upload/201911/27/1574844395_318556.png) 附代码 ``` def load_data_fashion_mnist(batch_size, resize=None, root=os.path.join( '~', '.mxnet', 'datasets', 'fashion-mnist')): root = os.path.expanduser(root) # 展开用户路径'~' transformer = [] if resize: transformer += [gdata.vision.transforms.Resize(resize)] transformer += [gdata.vision.transforms.ToTensor()] transformer = gdata.vision.transforms.Compose(transformer) mnist_train = gdata.vision.FashionMNIST(root=root, train=True) mnist_test = gdata.vision.FashionMNIST(root=root, train=False) num_workers = 0 if sys.platform.startswith('win32') else 4 train_iter = gdata.DataLoader( mnist_train.transform_first(transformer), batch_size, shuffle=True, num_workers=num_workers) test_iter = gdata.DataLoader( mnist_test.transform_first(transformer), batch_size, shuffle=False, num_workers=num_workers) return train_iter, test_iter batch_size = 128 # 如出现“out of memory”的报错信息,可减小batch_size或resize train_iter, test_iter = load_data_fashion_mnist(batch_size, resize=224) ```

如何解决运行Caffe的MNIST的实例出现的指针问题

我是ubuntu系统。下载好mnist数据后,在转换为LMDB时出现问题。 运行指令:sudo sh examples/mnist/create_mnist.sh 出现这样的问题:*** Error in `build/examples/mnist/convert_mnist_data.bin': munmap_chunk(): invalid pointer: 0x0000000000fc6240 *** 去检查./examples/mnist/目录下,本应生成train和test两个文件夹,此时只有train而没有test。 从网上搜索发现大家都没出现这样的问题,出现这个问题都是在自己编写程序时指针声明和释放出错,但是这里为什么会报错呢。对于这样的错误怎么解决呢? 完整的信息如下: Creating lmdb... I1113 15:57:00.470358 4735 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb I1113 15:57:00.470620 4735 convert_mnist_data.cpp:88] A total of 60000 items. I1113 15:57:00.470633 4735 convert_mnist_data.cpp:89] Rows: 28 Cols: 28 I1113 15:57:07.320497 4735 convert_mnist_data.cpp:108] Processed 60000 files. *** Error in `build/examples/mnist/convert_mnist_data.bin': munmap_chunk(): invalid pointer: 0x0000000000fc6240 *** ======= Backtrace: ========= /lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f06569327e5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x1a8)[0x7f065693f698] /usr/lib/x86_64-linux-gnu/libprotobuf.so.9(_ZN6google8protobuf8internal28DestroyDefaultRepeatedFieldsEv+0x1f)[0x7f066094d8af] /usr/lib/x86_64-linux-gnu/libprotobuf.so.9(_ZN6google8protobuf23ShutdownProtobufLibraryEv+0x8b)[0x7f066094cb3b] /usr/lib/x86_64-linux-gnu/libmirprotobuf.so.3(+0x233b9)[0x7f062efc23b9] /lib64/ld-linux-x86-64.so.2(+0x10de7)[0x7f0667226de7] /lib/x86_64-linux-gnu/libc.so.6(+0x39ff8)[0x7f06568f4ff8] /lib/x86_64-linux-gnu/libc.so.6(+0x3a045)[0x7f06568f5045] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf7)[0x7f06568db837] build/examples/mnist/convert_mnist_data.bin[0x4025b9]

caffe运行get_mnist报错,路径是对的,不知道为什么报这个错

![图片说明](https://img-ask.csdn.net/upload/201711/14/1510638957_464475.png)

如何自己做一个类似Fashion-MNIST的数据集

新人刚开始学习MXNet框架发现 用来训练和测试的数据集都是直接调用MNIST或者Fashion-MNIST 假如我想做一个动物识别的卷积神经网络 我有几种动物的图片各1000张 怎么样才能自己做一个数据集呢?

kaggle fashion-mnist.csv python的问题

本人python小白,在网上东拼西凑弄的代码 ``` >>> def toInt(array): array=mat(array) m,n=shape(array) newArray=zeros((m,n)) for i in range(m): for j in range(n): newArray[i,j]=int(array[i,j]) return newArray >>> def nomalizing(array): m,n=shape(array) for i in range(m): for j in range(n): if array[i,j]!=0: array[i,j]=1 return array >>> import csv >>> def loadTrainData(): l=[] with open('fashion-mnist_train.csv') as file: lines=csv.reader(file) for line in lines: l.append(line) l.remove(l[0]) l=array(l) label=l[:,0] data=l[:,1:] return nomalizing(toInt(data)),toInt(label) >>> def loadTestData(): l=[] with open('test_data.csv') as file: lines=csv.reader(file) for line in lines: l.append(line) l.remove(l[0]) l=array(l) data=l[:,1:] return nomalizing(toInt(data)) >>> def loadTestResult(): l=[] with open('sample_submission.csv') as file: lines=csv.reader(file) for line in lines: l.append(line) l.remove(l[0]) label=array(l) return toInt(label[:,1]) >>> def classify0(inX, dataSet, labels, k): inX=mat(inX) dataSet=mat(dataSet) labels=mat(labels) dataSetSize = dataSet.shape[0] diffMat = tile(inX, (dataSetSize,1)) - dataSet sqDiffMat = array(diffMat)**2 sqDistances = sqDiffMat.sum(axis=1) distances = sqDistances**0.5 sortedDistIndicies = distances.argsort() classCount={} for i in range(k): voteIlabel = labels[0,sortedDistIndicies[i]] classCount[voteIlabel] = classCount.get(voteIlabel,0) + 1 sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True) return sortedClassCount[0][0] >>> import KNN >>> from numpy import * >>> import operator >>> def handwritingClassTest(): trainData,trainLabel=loadTrainData() testData=loadTestData() testLabel=loadTestResult() m,n=shape(testData) errorCount=0 resultList=[] for i in range(m): classifierResult = classify0(testData[i], trainData, trainLabel, 5) resultList.append(classifierResult) print ("the classifier came back with: %d, the real answer is: %d") % (classifierResult, testLabel[0,i]) if (classifierResult != testLabel[0,i]): errorCount += 1.0 print ("\nthe total number of errors is: %d") % errorCount print ("\nthe total error rate is: %f") % (errorCount/float(m)) saveResult(resultList) >>> handwritingClassTest() ``` 运行了3个多小时,运行过程中的图片如下![图片说明](https://img-ask.csdn.net/upload/201710/11/1507704833_298439.png) 结果如下 ![图片说明](https://img-ask.csdn.net/upload/201710/11/1507704902_78629.png) result.csv也在桌面上显示了,但是为0字节 求问各位大神,如何修改code呢?是不是再次运行还得3个小时呀、

pycharm运行mnist_show.py出现如下问题,

# 这是深度学习入门这本书里的一段代码,请问这个问题是什么意思以及怎样解决? 报错如下:(下面有源代码)Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] on win32 runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') Converting train-images-idx3-ubyte.gz to NumPy Array ... Traceback (most recent call last): File "D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-2-eab209ee1d7f>", line 1, in <module> runfile('E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py', wdir='E:/PycharmProjects/deep-learning-from-scratch-master/ch03') File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "D:\Program Files\JetBrains\PyCharm 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "E:/PycharmProjects/deep-learning-from-scratch-master/ch03/mnist_show.py", line 13, in <module> (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 106, in load_mnist init_mnist() File "E:\PycharmProjects\deep-learning-from-scratch-master\dataset\mnist.py", line 76, in init_mnist dataset = _convert_numpy() 源代码为:# coding: utf-8 mnist_show.py:::: import sys, os sys.path.append(os.pardir) # 为了导入父目录的文件而进行的设定 import numpy as np from dataset.mnist import load_mnist from PIL import Image def img_show(img): pil_img = Image.fromarray(np.uint8(img)) pil_img.show() (x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False) img = x_train[0] label = t_train[0] print(label) # 5 print(img.shape) # (784,) img = img.reshape(28, 28) # 把图像的形状变为原来的尺寸 print(img.shape) # (28, 28) img_show(img) mnist.py::: # coding: utf-8 try: import urllib.request except ImportError: raise ImportError('You should use Python 3.x') import os.path import gzip import pickle import os import numpy as np url_base = 'http://yann.lecun.com/exdb/mnist/' key_file = { 'train_img':'train-images-idx3-ubyte.gz', 'train_label':'train-labels-idx1-ubyte.gz', 'test_img':'t10k-images-idx3-ubyte.gz', 'test_label':'t10k-labels-idx1-ubyte.gz' } dataset_dir = os.path.dirname(os.path.abspath(__file__)) save_file = dataset_dir + "/mnist.pkl" train_num = 60000 test_num = 10000 img_dim = (1, 28, 28) img_size = 784 def _download(file_name): file_path = dataset_dir + "/" + file_name if os.path.exists(file_path): return print("Downloading " + file_name + " ... ") urllib.request.urlretrieve(url_base + file_name, file_path) print("Done") def download_mnist(): for v in key_file.values(): _download(v) def _load_label(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: labels = np.frombuffer(f.read(), np.uint8, offset=8) print("Done") return labels def _load_img(file_name): file_path = dataset_dir + "/" + file_name print("Converting " + file_name + " to NumPy Array ...") with gzip.open(file_path, 'rb') as f: data = np.frombuffer(f.read(), np.uint8, offset=16) data = data.reshape(-1, img_size) print("Done") return data def _convert_numpy(): dataset = {} dataset['train_img'] = _load_img(key_file['train_img']) dataset['train_label'] = _load_label(key_file['train_label']) dataset['test_img'] = _load_img(key_file['test_img']) dataset['test_label'] = _load_label(key_file['test_label']) return dataset def init_mnist(): download_mnist() dataset = _convert_numpy() print("Creating pickle file ...") with open(save_file, 'wb') as f: pickle.dump(dataset, f, -1) print("Done!") def _change_one_hot_label(X): T = np.zeros((X.size, 10)) for idx, row in enumerate(T): row[X[idx]] = 1 return T def load_mnist(normalize=True, flatten=True, one_hot_label=False): """读入MNIST数据集 Parameters ---------- normalize : 将图像的像素值正规化为0.0~1.0 one_hot_label : one_hot_label为True的情况下,标签作为one-hot数组返回 one-hot数组是指[0,0,1,0,0,0,0,0,0,0]这样的数组 flatten : 是否将图像展开为一维数组 Returns ------- (训练图像, 训练标签), (测试图像, 测试标签) """ if not os.path.exists(save_file): init_mnist() with open(save_file, 'rb') as f: dataset = pickle.load(f) if normalize: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].astype(np.float32) dataset[key] /= 255.0 if one_hot_label: dataset['train_label'] = _change_one_hot_label(dataset['train_label']) dataset['test_label'] = _change_one_hot_label(dataset['test_label']) if not flatten: for key in ('train_img', 'test_img'): dataset[key] = dataset[key].reshape(-1, 1, 28, 28) return (dataset['train_img'], dataset['train_label']), (dataset['test_img'], dataset['test_label']) if __name__ == '__main__': init_mnist()

mnist教程中使用自己的数据,load_data该如何定义?

在学习mnist时使用官方数据包, 换成自己的数据集,(x_train, y_train)=mnist.load_data()代码中的mnist该怎样替换? 直接删除mnist,提示load_data未定义,自己随机添加一个数据名例如“s”,则报错提示s未定义,请问该怎么修改?

jupyter notebook画图后,最上方显示空白,该怎么取消

![图片说明](https://img-ask.csdn.net/upload/202002/23/1582454720_773443.jpg) ![图片说明](https://img-ask.csdn.net/upload/202002/23/1582454958_871324.jpg) jupyter notebook里面画图之后,页面最上方会出现这种空白,应该怎么取消呢?谢谢 ``` X, y = iter(test_iter).next() true_labels = d2l.get_fashion_mnist_labels(y.numpy()) pred_labels = d2l.get_fashion_mnist_labels(net(X).argmax(dim=1).numpy()) titles = [true + '\n' + pred for true, pred in zip(true_labels, pred_labels)] d2l.show_fashion_mnist(X[0:9], titles[0:9]) ```

在python上使用knn算法识别mnist。正确率只有27%。求查错,自己看了好几天都找不出来哪出问题了

在python上使用knn算法识别mnist。正确率只有27%。求查错,自己看了好几天都找不出来哪出问题了 ``` # -*- coding: UTF-8 -*- from __future__ import division import os import struct import numpy as np import data import heapq '''knn 求距离公式''' def euc(vec1, vec2): npvec1, npvec2 = np.array(vec1), np.array(vec2) return ((npvec1-npvec2)**2).sum() '''data.image_data是mnist数据集,b是将这个数据集分成60000份''' a=np.array([data.image_data]) b=a.reshape((60000,784)) '''data.image_test_data是mnist测试集,d是将这个数据集分成10000份''' c=np.array([data.image_test_data]) d=c.reshape((10000,784)) '''i是测试次数,y是正确的次数''' i=0 y=0 while i < 10000: list1=[] list2=[] '''计算距离,并放入list1''' for x in b: list1.append(euc(d[i],x)) '''从list1里选3个最小的''' result = map(list1.index, heapq.nsmallest(11, list1)) result.sort() for x in result: x1=data.label_data[x] list2.append(x1) if data.label_test_data[i]==max(set(list2), key=list2.count): '''用百分比显示出正确率''' y=y+1 print("correct",i+1,"%.4f%%" % (y/(i+1)*100)) else: print("not correct",i+1) i=i+1 ``` ![图片说明](https://img-ask.csdn.net/upload/201908/03/1564832452_390239.png)

手写数字识别mnist测试集上正确率很高自己写的数字识别很差

手写数字识别mnist测试集上正确率很高,自己用画图软件写的数字为什么识别很差

求大神指点!命令行运行python的调用brian包的SNN网络程序报错

求大神指点!用命令行运行python就一直报这个错,安装vcforpython也不行。。 E:\python\stdp-mnist-master>python "Diehl&Cook_spiking_MNIST.py" C:\Python27\lib\site-packages\brian_no_units.py:4: UserWarning: Turning off units warnings.warn("Turning off units") time needed to load training set: 13.7289998531 time needed to load test set: 1.8220000267 brian.stateupdater: WARNING Using codegen CStateUpdater brian.stateupdater: WARNING Using codegen CStateUpdater create neuron group A create recurrent connections (400L, 3L) ./weights/../random/AeAi.npy (160000L, 3L) ./weights/../random/AiAe.npy create monitors for A create connections between X and A (313600L, 3L) ./weights/XeAe.npy Looking for python27.dll Looking for python27.dll Looking for python27.dll brian.experimental.codegen.stateupdaters: WARNING C compilation failed, falling back on Python. Traceback (most recent call last): File "Diehl&Cook_spiking_MNIST.py", line 455, in <module> b.run(single_example_time, report='text') File "C:\Python27\lib\site-packages\brian\network.py", line 938, in run report=report, report_period=report_period) File "C:\Python27\lib\site-packages\brian\network.py", line 574, in run self.update() File "C:\Python27\lib\site-packages\brian\network.py", line 518, in update f() File "C:\Python27\lib\site-packages\brian\neurongroup.py", line 513, in update self.LS.push(spikes) # Store spikes File "C:\Python27\lib\site-packages\brian\utils\ccircular\ccircular.py", line 128, in push def push(self, *args): return _ccircular.SpikeContainer_push(self, *args) TypeError: Array of type 'long' required. Array of type 'long long' given

用CGAN可以生成指定类别且质量不错的MNIST图像,但是将这些图像输入到预训练好的MNIST分类模型中准确率很低,请问这是什么原因?

我在MNIST上训练了CGAN,可以生成我认为质量很不错的图像,如下所示: ![图片说明](https://img-ask.csdn.net/upload/202004/26/1587893017_868256.png) 这些图像看起来也挺真实的,把他们喂入一个预训练好的MNIST分类模型(测试集上准确率达到98.5%)。某些类别的fake image能表现出90%以上的分类准确率(如1,5,7),而某些类别的分类准确率很低,只有个位数的准确率(如3,4,8)。 我的理解是某些类别生成的fake图像虽然看似真实,但是有一些潜在的特征与real image有差别。 想问下大家对于这个问题怎么看呀,救救孩子

小白的提问..用RNN做MNIST怎么测试test_data上的准确率

看的莫烦的教学视频,有一节他用RNN做了MNIST的练习,输出的accuracy只是在训练数据上的,我想测试一下在测试数据集上的accuracy,但是不知道怎么变换数据的格式... 纯属小白一个,还望大神看一眼我的问题啊 代码是这个样子的 ``` import tensorflow as tf import input_data #this is data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) #hyperparameters learning_rate=0.001 training_iters=100000 batch_size=128 n_inputs=28 #data shape 28列,rnn做分成一个序列一个序列的 n_steps=28 #time steps 28行 n_hidden_unis=128 #neurons in hidden layer n_classes=10 #classes 10 #tf Graph input x = tf.placeholder("float", shape=[None, n_steps, n_inputs]) y = tf.placeholder("float", shape=[None, n_classes]) #Define weights weights={ 'in':tf.Variable(tf.random_normal([n_inputs,n_hidden_unis])), 'out':tf.Variable(tf.random_normal([n_hidden_unis,n_classes])) } biases={ 'in':tf.Variable(tf.constant(0.1,shape=[n_hidden_unis,])), 'out':tf.Variable(tf.constant(0.1,shape=[n_classes,])), } def RNN(X,weights,biases): ####hidden layer for input to cell##### # X(128batch,28steps,28inputs) ==> (128*28, 28inputs) print(X) X= tf.reshape(X,[-1,n_inputs]) print(X) # X_in ==>(128batch*28steps,28hidden) X_in= tf.matmul(X, weights['in'])+biases['in'] print(X_in) # X_in ==>(128batch,28steps,28hidden) X_in= tf.reshape(X_in,[-1, n_steps, n_hidden_unis]) print(X_in) ####cell##### lstm_cell= tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis, forget_bias=1.0, state_is_tuple=True) #cell fi divided into two parts(c_state, m_state) _init_state= lstm_cell.zero_state(batch_size,dtype="float") print(_init_state) outputs,states=tf.nn.dynamic_rnn(lstm_cell,X_in ,initial_state=_init_state,time_major=False) print(outputs,states) ####hidden layer for output as final results##### results=tf.matmul(states[1],weights['out'])+biases['out'] print(results) return results pred= RNN(x, weights, biases) cost= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))#logits=最后一层的输出,label train_op=tf.train.AdamOptimizer(learning_rate).minimize(cost) correct_pred=tf.equal(tf.argmax(pred,1),tf.argmax(y,1)) accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32)) init= tf.initialize_all_variables() sess=tf.Session() sess.run(init) step=0 while step*batch_size<training_iters: batch_xs,batch_ys= mnist.train.next_batch(batch_size) batch_xs=batch_xs.reshape(batch_size,n_steps,n_inputs) sess.run([train_op],feed_dict={ x:batch_xs, y:batch_ys}) if step%20==0: print(sess.run(accuracy,feed_dict={ x:batch_xs, y:batch_ys})) step=step+1 ```

我的mnist运行报错,请问是那出现问题了?

from __future__ import absolute_import from __future__ import division from __future__ import print_function import argparse #解析训练和检测数据模块 import sys from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf FLAGS = None def main(_): # Import data mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784]) #此函数可以理解为形参,用于定义过程,在执行的时候再赋具体的值 W = tf.Variable(tf.zeros([784, 10])) # tf.zeros表示所有的维度都为0 b = tf.Variable(tf.zeros([10])) y = tf.matmul(x, W) + b #对应每个分类概率值。 # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) # The raw formulation of cross-entropy, # # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)), # reduction_indices=[1])) # # can be numerically unstable. # # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw # outputs of 'y', and then average across the batch. cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) sess = tf.InteractiveSession() tf.global_variables_initializer().run() # Train for _ in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--data_dir', type=str, default='/tmp/tensorflow/mnist/input_data', help='Directory for storing input data') FLAGS, unparsed = parser.parse_known_args() tf.app.run(main=main, argv=[sys.argv[0]] + unparsed) ``` ```下面是报错: TimeoutError Traceback (most recent call last) ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1317 h.request(req.get_method(), req.selector, req.data, headers, -> 1318 encode_chunked=req.has_header('Transfer-encoding')) 1319 except OSError as err: # timeout error ~\Anaconda3\envs\tensorflow\lib\http\client.py in request(self, method, url, body, headers, encode_chunked) 1238 """Send a complete request to the server.""" -> 1239 self._send_request(method, url, body, headers, encode_chunked) 1240 ~\Anaconda3\envs\tensorflow\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked) 1284 body = _encode(body, 'body') -> 1285 self.endheaders(body, encode_chunked=encode_chunked) 1286 ~\Anaconda3\envs\tensorflow\lib\http\client.py in endheaders(self, message_body, encode_chunked) 1233 raise CannotSendHeader() -> 1234 self._send_output(message_body, encode_chunked=encode_chunked) 1235 ~\Anaconda3\envs\tensorflow\lib\http\client.py in _send_output(self, message_body, encode_chunked) 1025 del self._buffer[:] -> 1026 self.send(msg) 1027 ~\Anaconda3\envs\tensorflow\lib\http\client.py in send(self, data) 963 if self.auto_open: --> 964 self.connect() 965 else: ~\Anaconda3\envs\tensorflow\lib\http\client.py in connect(self) 1399 self.sock = self._context.wrap_socket(self.sock, -> 1400 server_hostname=server_hostname) 1401 if not self._context.check_hostname and self._check_hostname: ~\Anaconda3\envs\tensorflow\lib\ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session) 400 server_hostname=server_hostname, --> 401 _context=self, _session=session) 402 ~\Anaconda3\envs\tensorflow\lib\ssl.py in __init__(self, sock, keyfile, certfile, server_side, cert_reqs, ssl_version, ca_certs, do_handshake_on_connect, family, type, proto, fileno, suppress_ragged_eofs, npn_protocols, ciphers, server_hostname, _context, _session) 807 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets") --> 808 self.do_handshake() 809 ~\Anaconda3\envs\tensorflow\lib\ssl.py in do_handshake(self, block) 1060 self.settimeout(None) -> 1061 self._sslobj.do_handshake() 1062 finally: ~\Anaconda3\envs\tensorflow\lib\ssl.py in do_handshake(self) 682 """Start the SSL/TLS handshake.""" --> 683 self._sslobj.do_handshake() 684 if self.context.check_hostname: TimeoutError: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。 During handling of the above exception, another exception occurred: URLError Traceback (most recent call last) <ipython-input-1-eaf9732201f9> in <module>() 57 help='Directory for storing input data') 58 FLAGS, unparsed = parser.parse_known_args() ---> 59 tf.app.run(main=main, argv=[sys.argv[0]] + unparsed) ~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py in run(main, argv) 46 # Call the main function, passing through any arguments 47 # to the final program. ---> 48 _sys.exit(main(_sys.argv[:1] + flags_passthrough)) 49 50 <ipython-input-1-eaf9732201f9> in main(_) 15 def main(_): 16 # Import data ---> 17 mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True) 18 19 # Create the model ~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py in read_data_sets(train_dir, fake_data, one_hot, dtype, reshape, validation_size, seed) 238 239 local_file = base.maybe_download(TRAIN_LABELS, train_dir, --> 240 SOURCE_URL + TRAIN_LABELS) 241 with open(local_file, 'rb') as f: 242 train_labels = extract_labels(f, one_hot=one_hot) ~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in maybe_download(filename, work_directory, source_url) 206 filepath = os.path.join(work_directory, filename) 207 if not gfile.Exists(filepath): --> 208 temp_file_name, _ = urlretrieve_with_retry(source_url) 209 gfile.Copy(temp_file_name, filepath) 210 with gfile.GFile(filepath) as f: ~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in wrapped_fn(*args, **kwargs) 163 for delay in delays(): 164 try: --> 165 return fn(*args, **kwargs) 166 except Exception as e: # pylint: disable=broad-except) 167 if is_retriable is None: ~\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py in urlretrieve_with_retry(url, filename) 188 @retry(initial_delay=1.0, max_delay=16.0, is_retriable=_is_retriable) 189 def urlretrieve_with_retry(url, filename=None): --> 190 return urllib.request.urlretrieve(url, filename) 191 192 ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in urlretrieve(url, filename, reporthook, data) 246 url_type, path = splittype(url) 247 --> 248 with contextlib.closing(urlopen(url, data)) as fp: 249 headers = fp.info() 250 ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context) 221 else: 222 opener = _opener --> 223 return opener.open(url, data, timeout) 224 225 def install_opener(opener): ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in open(self, fullurl, data, timeout) 524 req = meth(req) 525 --> 526 response = self._open(req, data) 527 528 # post-process response ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in _open(self, req, data) 542 protocol = req.type 543 result = self._call_chain(self.handle_open, protocol, protocol + --> 544 '_open', req) 545 if result: 546 return result ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in _call_chain(self, chain, kind, meth_name, *args) 502 for handler in handlers: 503 func = getattr(handler, meth_name) --> 504 result = func(*args) 505 if result is not None: 506 return result ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in https_open(self, req) 1359 def https_open(self, req): 1360 return self.do_open(http.client.HTTPSConnection, req, -> 1361 context=self._context, check_hostname=self._check_hostname) 1362 1363 https_request = AbstractHTTPHandler.do_request_ ~\Anaconda3\envs\tensorflow\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args) 1318 encode_chunked=req.has_header('Transfer-encoding')) 1319 except OSError as err: # timeout error -> 1320 raise URLError(err) 1321 r = h.getresponse() 1322 except: URLError: <urlopen error [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。> In [ ]:

tensorflow CNN训练mnist数据集后识别自己写的数字效果不好

自己搭建的cnn,用mnist数据集训练的模型,准确率大概在97%,但是用手机拍了几张手写照片,灰度化之后用模型测试发现效果很差。。。0给认成了8,不知道为什么,有没有遇到类似问题的朋友 模型参考的tensorflow 1.0 学习:用CNN进行图像分类 - denny402 - 博客园 https://www.cnblogs.com/denny402/p/6931338.html

mxnet在win10下安装的一些问题

![图片说明](https://img-ask.csdn.net/upload/201812/08/1544257682_148216.png) # 本人小白,按照沐神的教程安装,结果在调用上就卡住了,求大神解答。。

mnist 数据库 python 加载出错 !!

RT. Anaconda python 3.4 中 加载数据出错,显示;UnicodeDecodeError: 'ascii' codec can't decode byte 0x90 in position 614: ordinal not in range(128) 代码如下: # dataset='mnist.pkl.gz' f = gzip.open(dataset, 'rb') train_set, valid_set, test_set = pickle.load(f) # 跪求四方土地,诸位大神,在此小憩片刻,帮小弟解疑答惑,感激不尽啊。

在中国程序员是青春饭吗?

今年,我也32了 ,为了不给大家误导,咨询了猎头、圈内好友,以及年过35岁的几位老程序员……舍了老脸去揭人家伤疤……希望能给大家以帮助,记得帮我点赞哦。 目录: 你以为的人生 一次又一次的伤害 猎头界的真相 如何应对互联网行业的「中年危机」 一、你以为的人生 刚入行时,拿着傲人的工资,想着好好干,以为我们的人生是这样的: 等真到了那一天,你会发现,你的人生很可能是这样的: ...

程序员请照顾好自己,周末病魔差点一套带走我。

程序员在一个周末的时间,得了重病,差点当场去世,还好及时挽救回来了。

和黑客斗争的 6 天!

互联网公司工作,很难避免不和黑客们打交道,我呆过的两家互联网公司,几乎每月每天每分钟都有黑客在公司网站上扫描。有的是寻找 Sql 注入的缺口,有的是寻找线上服务器可能存在的漏洞,大部分都...

点沙成金:英特尔芯片制造全过程揭密

“亚马逊丛林里的蝴蝶扇动几下翅膀就可能引起两周后美国德州的一次飓风……” 这句人人皆知的话最初用来描述非线性系统中微小参数的变化所引起的系统极大变化。 而在更长的时间尺度内,我们所生活的这个世界就是这样一个异常复杂的非线性系统…… 水泥、穹顶、透视——关于时间与技艺的蝴蝶效应 公元前3000年,古埃及人将尼罗河中挖出的泥浆与纳特龙盐湖中的矿物盐混合,再掺入煅烧石灰石制成的石灰,由此得来了人...

上班一个月,后悔当初着急入职的选择了

最近有个老铁,告诉我说,上班一个月,后悔当初着急入职现在公司了。他之前在美图做手机研发,今年美图那边今年也有一波组织优化调整,他是其中一个,在协商离职后,当时捉急找工作上班,因为有房贷供着,不能没有收入来源。所以匆忙选了一家公司,实际上是一个大型外包公司,主要派遣给其他手机厂商做外包项目。**当时承诺待遇还不错,所以就立马入职去上班了。但是后面入职后,发现薪酬待遇这块并不是HR所说那样,那个HR自...

女程序员,为什么比男程序员少???

昨天看到一档综艺节目,讨论了两个话题:(1)中国学生的数学成绩,平均下来看,会比国外好?为什么?(2)男生的数学成绩,平均下来看,会比女生好?为什么?同时,我又联想到了一个技术圈经常讨...

副业收入是我做程序媛的3倍,工作外的B面人生是怎样的?

提到“程序员”,多数人脑海里首先想到的大约是:为人木讷、薪水超高、工作枯燥…… 然而,当离开工作岗位,撕去层层标签,脱下“程序员”这身外套,有的人生动又有趣,马上展现出了完全不同的A/B面人生! 不论是简单的爱好,还是正经的副业,他们都干得同样出色。偶尔,还能和程序员的特质结合,产生奇妙的“化学反应”。 @Charlotte:平日素颜示人,周末美妆博主 大家都以为程序媛也个个不修边幅,但我们也许...

如果你是老板,你会不会踢了这样的员工?

有个好朋友ZS,是技术总监,昨天问我:“有一个老下属,跟了我很多年,做事勤勤恳恳,主动性也很好。但随着公司的发展,他的进步速度,跟不上团队的步伐了,有点...

我入职阿里后,才知道原来简历这么写

私下里,有不少读者问我:“二哥,如何才能写出一份专业的技术简历呢?我总感觉自己写的简历太烂了,所以投了无数份,都石沉大海了。”说实话,我自己好多年没有写过简历了,但我认识的一个同行,他在阿里,给我说了一些他当年写简历的方法论,我感觉太牛逼了,实在是忍不住,就分享了出来,希望能够帮助到你。 01、简历的本质 作为简历的撰写者,你必须要搞清楚一点,简历的本质是什么,它就是为了来销售你的价值主张的。往深...

外包程序员的幸福生活

今天给你们讲述一个外包程序员的幸福生活。男主是Z哥,不是在外包公司上班的那种,是一名自由职业者,接外包项目自己干。接下来讲的都是真人真事。 先给大家介绍一下男主,Z哥,老程序员,是我十多年前的老同事,技术大牛,当过CTO,也创过业。因为我俩都爱好喝酒、踢球,再加上住的距离不算远,所以一直也断断续续的联系着,我对Z哥的状况也有大概了解。 Z哥几年前创业失败,后来他开始干起了外包,利用自己的技术能...

C++11:一些微小的变化(新的数据类型、template表达式内的空格、nullptr、std::nullptr_t)

本文介绍一些C++的两个新特性,它们虽然微小,但对你的编程十分重要 一、Template表达式内的空格 C++11标准之前建议在“在两个template表达式的闭符之间放一个空格”的要求已经过时了 例如: vector&lt;list&lt;int&gt; &gt;; //C++11之前 vector&lt;list&lt;int&gt;&gt;; //C++11 二、nullptr ...

优雅的替换if-else语句

场景 日常开发,if-else语句写的不少吧??当逻辑分支非常多的时候,if-else套了一层又一层,虽然业务功能倒是实现了,但是看起来是真的很不优雅,尤其是对于我这种有强迫症的程序"猿",看到这么多if-else,脑袋瓜子就嗡嗡的,总想着解锁新姿势:干掉过多的if-else!!!本文将介绍三板斧手段: 优先判断条件,条件不满足的,逻辑及时中断返回; 采用策略模式+工厂模式; 结合注解,锦...

深入剖析Springboot启动原理的底层源码,再也不怕面试官问了!

大家现在应该都对Springboot很熟悉,但是你对他的启动原理了解吗?

离职半年了,老东家又发 offer,回不回?

有小伙伴问松哥这个问题,他在上海某公司,在离职了几个月后,前公司的领导联系到他,希望他能够返聘回去,他很纠结要不要回去? 俗话说好马不吃回头草,但是这个小伙伴既然感到纠结了,我觉得至少说明了两个问题:1.曾经的公司还不错;2.现在的日子也不是很如意。否则应该就不会纠结了。 老实说,松哥之前也有过类似的经历,今天就来和小伙伴们聊聊回头草到底吃不吃。 首先一个基本观点,就是离职了也没必要和老东家弄的苦...

为什么你不想学习?只想玩?人是如何一步一步废掉的

不知道是不是只有我这样子,还是你们也有过类似的经历。 上学的时候总有很多光辉历史,学年名列前茅,或者单科目大佬,但是虽然慢慢地长大了,你开始懈怠了,开始废掉了。。。 什么?你说不知道具体的情况是怎么样的? 我来告诉你: 你常常潜意识里或者心理觉得,自己真正的生活或者奋斗还没有开始。总是幻想着自己还拥有大把时间,还有无限的可能,自己还能逆风翻盘,只不是自己还没开始罢了,自己以后肯定会变得特别厉害...

为什么程序员做外包会被瞧不起?

二哥,有个事想询问下您的意见,您觉得应届生值得去外包吗?公司虽然挺大的,中xx,但待遇感觉挺低,马上要报到,挺纠结的。

当HR压你价,说你只值7K,你该怎么回答?

当HR压你价,说你只值7K时,你可以流畅地回答,记住,是流畅,不能犹豫。 礼貌地说:“7K是吗?了解了。嗯~其实我对贵司的面试官印象很好。只不过,现在我的手头上已经有一份11K的offer。来面试,主要也是自己对贵司挺有兴趣的,所以过来看看……”(未完) 这段话主要是陪HR互诈的同时,从公司兴趣,公司职员印象上,都给予对方正面的肯定,既能提升HR的好感度,又能让谈判气氛融洽,为后面的发挥留足空间。...

面试:第十六章:Java中级开发(16k)

HashMap底层实现原理,红黑树,B+树,B树的结构原理 Spring的AOP和IOC是什么?它们常见的使用场景有哪些?Spring事务,事务的属性,传播行为,数据库隔离级别 Spring和SpringMVC,MyBatis以及SpringBoot的注解分别有哪些?SpringMVC的工作原理,SpringBoot框架的优点,MyBatis框架的优点 SpringCould组件有哪些,他们...

面试阿里p7,被按在地上摩擦,鬼知道我经历了什么?

面试阿里p7被问到的问题(当时我只知道第一个):@Conditional是做什么的?@Conditional多个条件是什么逻辑关系?条件判断在什么时候执...

面试了一个 31 岁程序员,让我有所触动,30岁以上的程序员该何去何从?

最近面试了一个31岁8年经验的程序猿,让我有点感慨,大龄程序猿该何去何从。

【阿里P6面经】二本,curd两年,疯狂复习,拿下阿里offer

二本的读者,在老东家不断学习,最后逆袭

大三实习生,字节跳动面经分享,已拿Offer

说实话,自己的算法,我一个不会,太难了吧

程序员垃圾简历长什么样?

已经连续五年参加大厂校招、社招的技术面试工作,简历看的不下于万份 这篇文章会用实例告诉你,什么是差的程序员简历! 疫情快要结束了,各个公司也都开始春招了,作为即将红遍大江南北的新晋UP主,那当然要为小伙伴们做点事(手动狗头)。 就在公众号里公开征简历,义务帮大家看,并一一点评。《启舰:春招在即,义务帮大家看看简历吧》 一石激起千层浪,三天收到两百多封简历。 花光了两个星期的所有空闲时...

《经典算法案例》01-08:如何使用质数设计扫雷(Minesweeper)游戏

我们都玩过Windows操作系统中的经典游戏扫雷(Minesweeper),如果把质数当作一颗雷,那么,表格中红色的数字哪些是雷(质数)?您能找出多少个呢?文中用列表的方式罗列了10000以内的自然数、质数(素数),6的倍数等,方便大家观察质数的分布规律及特性,以便对算法求解有指导意义。另外,判断质数是初学算法,理解算法重要性的一个非常好的案例。

《Oracle Java SE编程自学与面试指南》最佳学习路线图(2020最新版)

正确选择比瞎努力更重要!

面试官:你连SSO都不懂,就别来面试了

大厂竟然要考我SSO,卧槽。

微软为一人收购一公司?破解索尼程序、写黑客小说,看他彪悍的程序人生!...

作者 | 伍杏玲出品 | CSDN(ID:CSDNnews)格子衬衫、常掉发、双肩包、修电脑、加班多……这些似乎成了大众给程序员的固定标签。近几年流行的“跨界风”开始刷新人们对程序员的...

终于,月薪过5万了!

来看几个问题想不想月薪超过5万?想不想进入公司架构组?想不想成为项目组的负责人?想不想成为spring的高手,超越99%的对手?那么本文内容是你必须要掌握的。本文主要详解bean的生命...

我说我懂多线程,面试官立马给我发了offer

不小心拿了几个offer,有点烦

自从喜欢上了B站这12个UP主,我越来越觉得自己是个废柴了!

不怕告诉你,我自从喜欢上了这12个UP主,哔哩哔哩成为了我手机上最耗电的软件,几乎每天都会看,可是吧,看的越多,我就越觉得自己是个废柴,唉,老天不公啊,不信你看看…… 间接性踌躇满志,持续性混吃等死,都是因为你们……但是,自己的学习力在慢慢变强,这是不容忽视的,推荐给你们! 都说B站是个宝,可是有人不会挖啊,没事,今天咱挖好的送你一箩筐,首先啊,我在B站上最喜欢看这个家伙的视频了,为啥 ,咱撇...

立即提问
相关内容推荐