How do you inspect the parameters of a trained model in TensorFlow?


In TensorFlow, how do you stop training one particular parameter after a certain number of iterations, while continuing to train the others?
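In TF1 this is usually done by passing an explicit `var_list` to `optimizer.minimize()` (build a second train op that excludes the frozen variable) or by wrapping the frozen tensor in `tf.stop_gradient`. The effect can be sketched with plain-numpy SGD on a toy loss (names and the loss are illustrative, not from the question):

```python
import numpy as np

# Toy loss L(w1, w2) = (w1 - 3)^2 + (w2 + 1)^2, minimized by SGD.
# After `freeze_at` steps we stop applying gradients to w1 -- the numpy
# analogue of switching to a train op whose var_list excludes w1.
w1, w2 = 0.0, 0.0
lr, freeze_at = 0.1, 50
w1_frozen_value = None
for step in range(100):
    g1, g2 = 2 * (w1 - 3), 2 * (w2 + 1)   # gradients of the toy loss
    if step < freeze_at:
        w1 -= lr * g1                     # w1 still training
    elif w1_frozen_value is None:
        w1_frozen_value = w1              # value at the moment of freezing
    w2 -= lr * g2                         # w2 trains throughout
```

After the loop, `w1` is exactly its value at the freeze point while `w2` has kept converging toward -1; in TF1 the same split is achieved by building two `minimize()` ops with different `var_list`s and switching which one you run.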

Why is the loss after reloading a TensorFlow model and resuming training larger than the loss from training the original model continuously?
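One common cause (among others, such as a different data-shuffling order): stateful optimizers like Adam and RMSProp keep per-variable moment estimates, and if those slot variables were not saved, or a fresh optimizer is built on reload, training resumes with zeroed moments. Because of Adam's bias correction, the very first step then has magnitude close to the learning rate regardless of how small the gradient is, kicking a converged model away from its optimum. A numpy sketch of one fresh Adam step on a toy loss (values illustrative):

```python
import numpy as np

# Weight "restored" from a converged checkpoint (optimum of (w-5)^2 is 5),
# but the optimizer's moment estimates were NOT saved, so Adam restarts
# with m = v = 0 and step count t = 1.
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
w = 5.001
loss_before = (w - 5) ** 2

g = 2 * (w - 5)                          # tiny gradient near the optimum
m = (1 - b1) * g
v = (1 - b2) * g * g
mhat = m / (1 - b1 ** 1)                 # bias correction at t = 1 ...
vhat = v / (1 - b2 ** 1)                 # ... gives mhat = g, vhat = g*g
w -= lr * mhat / (np.sqrt(vhat) + eps)   # step size ~ lr, regardless of g
loss_after = (w - 5) ** 2
```

The loss jumps by orders of magnitude from a single step. Saving and restoring with a `tf.train.Saver` over *all* global variables (which includes the optimizer slots) avoids this.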

How do you pretrain and then fine-tune on your own data in TensorFlow?

Testing immediately after training versus loading the saved model and testing gives different results in TensorFlow — one is good, the other slightly worse. Why?

When training an image classifier with a CNN in TensorFlow, the model fails to learn: accuracy stays at 0.1 (ten classes). What could the cause be?

```
import time

import numpy as np
import tensorflow as tf

# read_tfrecords and train_tfrecord_file are defined elsewhere (not shown).

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]; must have strides[0] = strides[3] = 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # NOTE: despite the name, this pools 4x4 with stride 4
    return tf.nn.max_pool(x, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')

# placeholders for the network inputs
xs = tf.placeholder(tf.float32, [None, 65536]) / 255.  # 256x256 grayscale
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
x_image = tf.reshape(xs, [-1, 256, 256, 1])

## conv1 layer ##
W_conv1 = weight_variable([3, 3, 1, 64])   # 3x3 kernel, in 1, out 64
b_conv1 = bias_variable([64])
h_conv1 = tf.nn.elu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                         padding='SAME')   # output 128x128x64

## conv2 layer ##
W_conv2 = weight_variable([3, 3, 64, 128])
b_conv2 = bias_variable([128])
h_conv2 = tf.nn.elu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)            # output 32x32x128

## conv3 layer ##
W_conv3 = weight_variable([3, 3, 128, 256])
b_conv3 = bias_variable([256])
h_conv3 = tf.nn.elu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_2x2(h_conv3)            # output 8x8x256

## conv4 layer ##
W_conv4 = weight_variable([3, 3, 256, 512])
b_conv4 = bias_variable([512])
h_conv4 = tf.nn.elu(conv2d(h_pool3, W_conv4) + b_conv4)
h_pool4 = max_pool_2x2(h_conv4)            # output 2x2x512

# ## conv5 layer (disabled) ##
# W_conv5 = weight_variable([3, 3, 512, 512])
# b_conv5 = bias_variable([512])
# h_conv5 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
# h_pool5 = max_pool_2x2(h_conv4)

## fc1 layer ##
W_fc1 = weight_variable([2 * 2 * 512, 128])
b_fc1 = bias_variable([128])
h_pool4_flat = tf.reshape(h_pool4, [-1, 2 * 2 * 512])  # [n, 2, 2, 512] -> [n, 2048]
h_fc1 = tf.nn.elu(tf.matmul(h_pool4_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

## fc2 layer ##
W_fc2 = weight_variable([128, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# optimizer and train op
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=ys, logits=prediction))
train_step = tf.train.RMSPropOptimizer(1e-3).minimize(loss)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(ys, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# for saving and loading the model
saver = tf.train.Saver()

def int2onehot(train_batch_ys):
    num_labels = train_batch_ys.shape[0]
    num_classes = 10
    index_offset = np.arange(num_labels) * num_classes
    labels_one_hot = np.zeros((num_labels, num_classes), dtype=np.float32)
    labels_one_hot.flat[index_offset + train_batch_ys.ravel()] = 1
    return labels_one_hot

train_label_lists, train_data_lists, train_fname_lists = read_tfrecords(train_tfrecord_file)

iterations = 100
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for it in range(iterations):
        # the inputs must be converted to np.array
        for i in range(200):
            train_label_list = train_label_lists[i]
            train_data_list = train_data_lists[i]
            train_name_list = train_fname_lists[i]
            print('batch labels:', train_label_list)
            train_batch_xs = np.reshape(train_data_list, (-1, 65536))
            train_batch_ys = int2onehot(train_label_list)

            print("after fc1 ----------------------------------------")
            for j in range(80):  # j, not i: don't shadow the batch index
                act = sess.run(h_fc1_drop, feed_dict={xs: train_batch_xs,
                                                      ys: train_batch_ys,
                                                      keep_prob: 0.5})
                print("element " + str(j) + ":", act[j])
            print("after fc2 ----------------------------------------")
            for j in range(80):
                act = sess.run(prediction, feed_dict={xs: train_batch_xs,
                                                      ys: train_batch_ys,
                                                      keep_prob: 0.5})
                print("element " + str(j) + ":", act[j])

            train_step.run(feed_dict={xs: train_batch_xs, ys: train_batch_ys,
                                      keep_prob: 0.5})
            time.sleep(7)

        # every 5 iterations, check accuracy and stop once it reaches 100%
        if it % 5 == 0:
            iterate_accuracy = accuracy.eval(feed_dict={xs: train_batch_xs,
                                                        ys: train_batch_ys,
                                                        keep_prob: 0.5})
            print('iteration %d: accuracy %s' % (it, iterate_accuracy))
            if iterate_accuracy >= 1:
                break
    print('training finished!')
```
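One likely cause in the code above (a reading of the posted code, not a confirmed diagnosis): `prediction` has already been passed through `tf.nn.softmax`, but `tf.nn.softmax_cross_entropy_with_logits` expects *raw* logits and applies softmax internally. Softmaxing twice squashes the class scores toward uniform, so gradients are tiny and accuracy sits near chance (0.1 for 10 classes). A numpy sketch of the squashing effect:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Ten-class scores with one clearly dominant class.
logits = np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
p_once = softmax(logits)            # what the loss expects to see internally
p_twice = softmax(softmax(logits))  # what the posted code effectively computes
```

`p_once` is confidently peaked while `p_twice` is nearly uniform. The fix is to feed the pre-softmax tensor (`tf.matmul(h_fc1_drop, W_fc2) + b_fc2`) as `logits` and keep `tf.nn.softmax` only for producing predictions; evaluating accuracy with `keep_prob: 1.0` instead of `0.5` would also help.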

Are the model files produced by training a TensorFlow or Caffe model from C++ the same as those produced from Python?

In TensorFlow, if the model is wrapped in a function, do two calls to that function produce the same computation graph?

During TensorFlow training the weights don't update, the loss doesn't decrease, and the output stays constant; only the bias changes, and very slowly. Why?
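A frequent cause (one possibility among several, alongside a too-small learning rate or saturated activations): if a layer's input is near zero — e.g. images accidentally normalized twice — the weight gradient, which is the input times the upstream gradient, vanishes, while the bias gradient is the upstream gradient alone and does not. A numpy sketch for a single linear unit y = w·x + b with squared loss:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
x = np.zeros(3)        # degenerate (all-zero) input
t, lr = 1.0, 0.1
w0 = w.copy()
for _ in range(100):
    y = w @ x + b
    d = 2 * (y - t)    # dL/dy
    w -= lr * d * x    # dL/dw = dL/dy * x  -> zero whenever x is zero
    b -= lr * d        # dL/db = dL/dy     -> nonzero, so only b moves
```

After training, `w` is untouched while `b` has converged toward the target: exactly the "only the bias changes" symptom. Checking the scale of the actual input batches (and the gradients, via `tf.gradients`) is the first diagnostic step.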

Why does the same TensorFlow code produce different results on every run?
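TF1 runs vary unless you fix the graph-level seed (`tf.set_random_seed`), op-level seeds, and any Python/numpy seeds used for data shuffling; even then, some GPU kernels (e.g. certain cuDNN reductions) remain nondeterministic. The seeding principle, shown with numpy (the same idea applies to `tf.set_random_seed`):

```python
import numpy as np

def init_weights(seed):
    # Seeding the generator makes "random" initialization reproducible.
    rng = np.random.default_rng(seed)
    return rng.normal(size=5)

a = init_weights(42)
b = init_weights(42)   # same seed -> identical weights
c = init_weights(43)   # different seed -> different weights
```

Fixing all seeds makes weight initialization, dropout masks, and shuffling repeat exactly between runs, which is usually enough to make CPU runs bit-identical.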

After training a network with TensorFlow and reloading the saved model, the test accuracy is very low. Below is my loading code — could someone point out whether the loading is wrong? I first rebuilt the model by hand (the weights, biases, and network construction are omitted):

```
# Build the model
pred = alex_net(x, weights, biases, keep_prob)

# Loss and optimizer (softmax combined with cross entropy)
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=pred))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Evaluation
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# 3. Train and evaluate: initialize variables, then restore the checkpoint
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(init)
    saver.restore(sess, tf.train.latest_checkpoint(model_dir))
    pred_test = sess.run(pred, {x: test_x, keep_prob: 1.0})
    result = sess.run(tf.argmax(pred_test, 1))
```

When using the TensorFlow Object Detection API, can you modify the pipeline.config file without retraining?
