马超的博客 2017-03-17 11:43

TensorFlow multilayer perceptron implementation leaks memory!

# coding: UTF-8
# Multilayer perceptron for MNIST handwritten-digit recognition in TensorFlow
import tensorflow as tf

######## Load the dataset ########
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

sess = tf.InteractiveSession()
in_units = 784
h1_units = 300

w1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))
b1 = tf.Variable(tf.zeros([h1_units]))
w2 = tf.Variable(tf.zeros([h1_units, 10]))
b2 = tf.Variable(tf.zeros([10]))

x = tf.placeholder(tf.float32, [None, in_units])
keep_prob = tf.placeholder(tf.float32)

######## Define the model ########
hidden1 = tf.nn.relu(tf.matmul(x, w1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)
y = tf.nn.softmax(tf.matmul(hidden1_drop, w2) + b2)

######## Define the loss and choose an optimizer ########
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)  # Adagrad with learning rate 0.3

tf.global_variables_initializer().run()
for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
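
One way to confirm that memory really grows during training is to log the process's peak RSS every few hundred steps with Python's standard resource module. A minimal sketch, assuming a Unix platform (on Linux ru_maxrss is reported in kilobytes) and the graph/session built above:

# Hypothetical monitoring loop: same training step as above, plus a
# periodic print of the process's peak resident set size.
import resource

for i in range(3000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})
    if i % 300 == 0:
        peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print("step %d, peak RSS: %d KB" % (i, peak_kb))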

1 answer

  • weixin_37951569 2017-03-17 11:50

    Dude, your code keeps rebuilding (re-compiling) the graph.
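
    In TF 1.x graph mode this usually means new ops are being added to the default graph on every iteration, so the graph (and the memory holding it) keeps growing. A minimal sketch of one way to catch that, assuming the session and ops from the question: build every op first, then call sess.graph.finalize() before the training loop, so any accidental op creation raises a RuntimeError instead of silently leaking.

    # Build all ops (including the evaluation ops) before training ...
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # ... then freeze the graph: any op created inside the loop now fails fast.
    sess.graph.finalize()

    for i in range(3000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        train_step.run({x: batch_xs, y_: batch_ys, keep_prob: 0.75})

    print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))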

