snowcake666 2017-10-31 05:44 · acceptance rate: 0%
Viewed 1,864 times

Beginner question: using an RNN on MNIST, how do I measure accuracy on the test data?

I've been following Morvan's (莫烦) tutorial videos. In one lesson he trains an RNN on MNIST, but the accuracy it prints is only on the training batches. I'd like to also check the accuracy on the test set, but I don't know how to transform the data into the right format...

I'm a complete beginner, so I'd really appreciate it if someone could take a look at my question.

The code looks like this:

import tensorflow as tf
# MNIST loader script from the TensorFlow tutorials; if no local input_data.py is
# available, `from tensorflow.examples.tutorials.mnist import input_data` also works
import input_data

# load the MNIST data (train / validation / test splits)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

#hyperparameters
learning_rate=0.001
training_iters=100000
batch_size=128

n_inputs=28  # 28 pixels per image row; one row is fed to the RNN at each time step
n_steps=28   # 28 time steps (28 rows per image)
n_hidden_unis=128  # neurons in hidden layer
n_classes=10  # MNIST classes (digits 0-9)

#tf Graph input
x = tf.placeholder("float", shape=[None, n_steps, n_inputs])
y = tf.placeholder("float", shape=[None, n_classes])

#Define weights
weights={
    'in':tf.Variable(tf.random_normal([n_inputs,n_hidden_unis])),
    'out':tf.Variable(tf.random_normal([n_hidden_unis,n_classes]))
}
biases={
    'in':tf.Variable(tf.constant(0.1,shape=[n_hidden_unis,])),
    'out':tf.Variable(tf.constant(0.1,shape=[n_classes,])),
}

def RNN(X,weights,biases):
    ####hidden layer for input to cell#####
    # X(128batch,28steps,28inputs) ==> (128*28, 28inputs)
    print(X)
    X= tf.reshape(X,[-1,n_inputs])
    print(X)
    # X_in ==> (128 batch * 28 steps, 128 hidden)
    X_in= tf.matmul(X, weights['in'])+biases['in']
    print(X_in)
    # X_in ==> (128 batch, 28 steps, 128 hidden)
    X_in= tf.reshape(X_in,[-1, n_steps, n_hidden_unis])
    print(X_in)

    ####cell#####
    lstm_cell= tf.nn.rnn_cell.BasicLSTMCell(n_hidden_unis, forget_bias=1.0, state_is_tuple=True)
    # the LSTM state is divided into two parts (c_state, m_state)
    _init_state= lstm_cell.zero_state(batch_size,dtype="float")
    print(_init_state)

    outputs,states=tf.nn.dynamic_rnn(lstm_cell,X_in ,initial_state=_init_state,time_major=False)
    print(outputs,states)

    ####hidden layer for output as final results#####
    # states is an LSTMStateTuple (c_state, m_state); states[1] is the final hidden state
    results=tf.matmul(states[1],weights['out'])+biases['out']
    print(results)
    return results

pred= RNN(x, weights, biases)

cost= tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))  # logits = output of the last layer, labels = one-hot ground truth
train_op=tf.train.AdamOptimizer(learning_rate).minimize(cost)
correct_pred=tf.equal(tf.argmax(pred,1),tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred,tf.float32))

init= tf.global_variables_initializer()  # initialize_all_variables is deprecated

sess=tf.Session()
sess.run(init)
step=0
while step*batch_size<training_iters:
    batch_xs,batch_ys= mnist.train.next_batch(batch_size)
    batch_xs=batch_xs.reshape(batch_size,n_steps,n_inputs)
    sess.run([train_op],feed_dict={
        x:batch_xs,
        y:batch_ys})
    if step%20==0:
        print(sess.run(accuracy,feed_dict={
            x:batch_xs,
            y:batch_ys}))
    step=step+1


1 answer

  • CodeKind 2017-10-31 06:22

    In mnist = input_data.read_data_sets("MNIST_data/", one_hot=True), just replace "MNIST_data/" with the path to your test data.
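    For reference, the mnist object returned by read_data_sets already contains a held-out test split (mnist.test.images and mnist.test.labels), so the data path can also stay as it is. Below is a minimal sketch of evaluating on that split, assuming the graph and sess from the question are still available; because _init_state is created with a fixed batch_size, the test set is fed in chunks of that size (any remainder after the last full batch is dropped):

    # evaluate on the test split; reshape the images the same way as the training batches
    test_xs = mnist.test.images.reshape(-1, n_steps, n_inputs)
    test_ys = mnist.test.labels
    n_test_batches = len(test_xs) // batch_size
    acc_sum = 0.0
    for i in range(n_test_batches):
        start, end = i * batch_size, (i + 1) * batch_size
        acc_sum += sess.run(accuracy, feed_dict={
            x: test_xs[start:end],
            y: test_ys[start:end]})
    print("test accuracy:", acc_sum / n_test_batches)

    If the cell's initial state were built with a dynamic batch size (e.g. tf.shape(x)[0]) instead of the constant batch_size, the whole test set could be fed in a single run.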
