How can a CNN use a self-adjusting (decaying) learning rate?

```
........
#--------------------------- end of network ---------------------------
loss = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=logits)
train_op = tf.train.AdamOptimizer(learning_rate=0.05).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits, 1), tf.int32), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
......
for epoch in range(n_epoch):
    ......
    # training
    train_loss, train_acc, n_batch = 0, 0, 0
    for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size, shuffle=True):
        _, err, ac = sess.run([train_op, loss, acc],
                              feed_dict={x: x_train_a, y_: y_train_a})
        train_loss += err; train_acc += ac; n_batch += 1
    print("   train loss: %f" % (train_loss / n_batch))
    print("   train acc: %f" % (train_acc / n_batch))
......
```
How can I use initial rate / loop count (0.05 / epoch) as the current learning rate in each iteration of the loop, so that the step size decreases over time?
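One way to get that schedule is to compute the rate in Python each epoch and feed it into the graph. A minimal sketch, assuming epochs are counted from 1 and using an illustrative placeholder name `lr`:

```python
def inverse_epoch_decay(base_lr, epoch):
    """Current rate = base_lr / epoch, so the step size shrinks every epoch."""
    return base_lr / max(epoch, 1)

# In the TF1 graph above, the Python-computed rate can be fed in per step:
#   lr = tf.placeholder(tf.float32, shape=[])                    # illustrative name
#   train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss)
#   sess.run(train_op, feed_dict={x: x_train_a, y_: y_train_a,
#                                 lr: inverse_epoch_decay(0.05, epoch + 1)})
for epoch in range(1, 5):
    print(epoch, inverse_epoch_decay(0.05, epoch))  # 0.05, 0.025, ...
```

TF1 also ships graph-side schedules such as `tf.train.inverse_time_decay` and `tf.train.exponential_decay`, which compute a similar decay from a `global_step` variable without feeding anything by hand.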

1. I built a neural network with three fully-connected layers.
2. Code:
```
import matplotlib as mpl
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, sklearn, tf, keras:
    print(module.__name__, module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

# tf.keras.models.Sequential
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
### "sparse" because the labels are class indices; with one-hot labels, drop the sparse prefix
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
              metrics=["accuracy"])
#model.layers
#model.summary()
history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid, y_valid))
```
3. Output:
```
runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2')
2.0.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
sklearn 0.21.3
tensorflow 2.0.0
tensorflow_core.keras 2.2.4-tf
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 2/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
(epochs 3-10 are identical: loss and val_loss stay nan, accuracy stays at 0.1008 / 0.0914)
```
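A likely cause of the NaN loss (an assumption, since the AutoGraph warning itself is harmless): `load_data()` returns raw uint8 pixels in [0, 255], and plain `sgd` on unscaled inputs diverges almost immediately. Scaling the images before `fit` usually fixes it; a sketch with NumPy, using random pixels as a stand-in for the real arrays:

```python
import numpy as np

# Stand-in for x_train from fashion_mnist.load_data(): uint8 pixels 0-255.
x_train_raw = np.random.randint(0, 256, size=(4, 28, 28), dtype=np.uint8)

# Scale to [0, 1] before model.fit (apply the same scaling to x_valid and x_test).
x_train = x_train_raw.astype(np.float32) / 255.0

print(x_train.dtype, float(x_train.min()), float(x_train.max()))
```

Standardizing with `sklearn.preprocessing.StandardScaler` on the flattened images is another common choice; the key point is that train, validation, and test splits must all go through the same transform.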

tensorflow.python.framework.errors_impl.InvalidArgumentError

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Key: image/encoded.  Can't parse serialized Example.
[[Node: ParseSingleExample/ParseSingleExample = ParseSingleExample[Tdense=[DT_STRING, DT_INT64], dense_keys=["image/encoded", "image/label"], dense_shapes=[[8], []], num_sparse=0, sparse_keys=[], sparse_types=[]](arg0, ParseSingleExample/Const, ParseSingleExample/Const_1)]]
[[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,8,?,?,3], [?]], output_types=[DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]]
```
What causes this? Is something wrong with my dataset?

TensorFlow multi-GPU parallel training: the model converges slowly

'Datasets' object has no attribute 'train_step'

```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import mnist_forward
import os

BATAH_SIZE = 200
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
REGULARIZER = 0.0001
STEPS = 50000
MOVING_AVERAGE_DECAY = 0.99
MODEL_SAVE_PATH = "./model/"
MODEL_NAME = "mnist_model"

def backward(mnist):
    x = tf.placeholder(tf.float32, [None, mnist_forward.INPUT_NODE])
    y_ = tf.placeholder(tf.float32, [None, mnist_forward.OUTPUT_NODE])
    y = mnist_forward.forward(x, REGULARIZER)
    global_step = tf.Variable(0, trainable=False)

    ce = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.arg_max(y_, 1))
    cem = tf.reduce_mean(ce)
    loss = cem + tf.add_n(tf.get_collection('losses'))

    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATAH_SIZE,
        LEARNING_RATE_DECAY, staircase=True)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)

    ema = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    ema_op = ema.apply(tf.trainable_variables())
    with tf.control_dependencies([train_step, ema_op]):
        train_op = tf.no_op(name='train')

    saver = tf.train.Saver()
    with tf.Session() as sess:
        init_op = tf.global_variables_initializer()
        sess.run(init_op)
        for i in range(STEPS):
            xs, ys = mnist.train_step.next_batch(BATAH_SIZE)
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: xs, y_: ys})
            if i % 1000 == 0:
                print("After %d training step(s), loss on training batch is %g."
                      % (step, loss_value))
                saver.save(sess, os.path.join(MODEL_SAVE_PATH, MODEL_NAME),
                           global_step=global_step)

def main():
    mnist = input_data.read_data_sets("./data/", one_hot=True)
    backward(mnist)

if __name__ == '__main__':
    main()
```
Running the program reports:
```
File "C:/Users/98382/PycharmProjects/minst/mnist_backward.py", line 54, in <module>
    main()
File "C:/Users/98382/PycharmProjects/minst/mnist_backward.py", line 51, in main
    backward(mnist)
File "C:/Users/98382/PycharmProjects/minst/mnist_backward.py", line 43, in backward
    xs, ys = mnist.train_step.next_batch(BATAH_SIZE)
AttributeError: 'Datasets' object has no attribute 'train_step'
```
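The traceback points at the name itself: the object returned by `input_data.read_data_sets` exposes the splits `train` / `validation` / `test`, not `train_step` (which in this script is the name of a graph op), so the call should be `mnist.train.next_batch(...)`. A minimal mock that only mirrors those field names (the real `Datasets` class lives in TensorFlow's tutorial package):

```python
from collections import namedtuple

# Simplified stand-in for the object read_data_sets returns: a namedtuple
# with exactly the fields train, validation, and test.
Datasets = namedtuple('Datasets', ['train', 'validation', 'test'])
mnist = Datasets(train='train-split', validation='val-split', test='test-split')

print(mnist.train)                    # the attribute that actually exists
print(hasattr(mnist, 'train_step'))   # False -- hence the AttributeError
```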

```
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')
    weights1 = tf.Variable(
        tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))
    weights2 = tf.Variable(
        tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # First forward pass for y; the moving averages are not used here, so pass None.
    y = inference(x, None, weights1, biases1, weights2, biases2)
    # Variable holding the number of training steps; marked not trainable.
    global_step = tf.Variable(0, trainable=False)

    # Create the exponential-moving-average class from the decay rate and the
    # step variable; passing the step speeds up averaging in early training.
    variable_averages = tf.train.ExponentialMovingAverage(
        MOVING_AVERAGE_DECAY, global_step)
    # Apply the moving average to every network parameter, i.e. every variable
    # not declared with trainable=False.
    variable_averages_op = variable_averages.apply(
        tf.trainable_variables())
    # Forward pass that reads the shadow (averaged) variables, reusing inference.
    average_y = inference(x, variable_averages, weights1, biases1,
                          weights2, biases2)

    # Cross entropy measures the gap between the predictions y and the labels y_.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    # Mean cross entropy over all examples in the batch.
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    # L2 regularization loss.
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    # Total loss = cross entropy + regularization.
    loss = cross_entropy_mean + regularization

    # Decaying learning rate.
    global learning_rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE, global_step,
        mnist.train.num_examples / BATCH_SIZE,
        LEARNING_RATE_DECAY)
    # Minimize the loss.
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)
    with tf.control_dependencies([train_step, variable_averages_op]):
        train_op = tf.no_op(name='train')
```
As shown, `learning_rate` is defined as a local variable inside the `train` function, but referencing it outside the function raises a "name is not defined" error; declaring it as a global variable did not help either. ![screenshot](https://img-ask.csdn.net/upload/201807/02/1530499378_484353.png)
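The NameError is ordinary Python scoping, not TensorFlow: `learning_rate` is a local name inside `train()`, and even a `global` declaration only creates the module-level name at the moment the function actually runs. A minimal sketch in plain Python, where `0.8` stands in for the `exponential_decay` tensor:

```python
def train():
    global learning_rate   # binds the module-level name, but only once train() runs
    learning_rate = 0.8    # stand-in for the tf.train.exponential_decay tensor

try:
    print(learning_rate)   # NameError: train() has not been called yet
except NameError:
    print("learning_rate is not defined yet")

train()
print(learning_rate)       # 0.8 -- visible at module level after the call
```

Returning the tensor from `train()` (or storing it on a model object) is usually cleaner than `global`, and makes the dependency explicit at the call site.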

Running under Linux + PyTorch raises RuntimeError: PyTorch was compiled without NumPy support

TensorFlow: a problem with matrix multiplication in simple handwritten-digit recognition

How do I use a Dataset as data input to TensorFlow's Keras?

ValueError: invalid literal for int() with base 10: 'aer'

```
#coding=utf-8
#Version: python3.6.0
#Tools: Pycharm 2017.3.2
import numpy as np
import tensorflow as tf
import re

TRAIN_PATH = "data/ptb.train.txt"
EVAL_PATH = "data/ptb.valid.txt"
TEST_PATH = "data/ptb.test.txt"
HIDDEN_SIZE = 300
NUM_LAYERS = 2
VOCAB_SIZE = 10000
TRAIN_BATCH_SIZE = 20
TRAIN_NUM_STEP = 35
EVAL_BATCH_SIZE = 1
EVAL_NUM_STEP = 1
NUM_EPOCH = 5
LSTM_KEEP_PROB = 0.9
EMBEDDING_KEEP_PROB = 0.9
MAX_GRED_NORM = 5
SHARE_EMB_AND_SOFTMAX = True

class PTBModel(object):
    def __init__(self, is_training, batch_size, num_steps):
        self.batch_size = batch_size
        self.num_steps = num_steps
        self.input_data = tf.placeholder(tf.int32, [batch_size, num_steps])
        self.targets = tf.placeholder(tf.int32, [batch_size, num_steps])

        dropout_keep_prob = LSTM_KEEP_PROB if is_training else 1.0
        lstm_cells = [
            tf.nn.rnn_cell.DropoutWrapper(
                tf.nn.rnn_cell.BasicLSTMCell(HIDDEN_SIZE),
                output_keep_prob=dropout_keep_prob)
            for _ in range(NUM_LAYERS)]
        cell = tf.nn.rnn_cell.MultiRNNCell(lstm_cells)

        self.initial_state = cell.zero_state(batch_size, tf.float32)
        embedding = tf.get_variable("embedding", [VOCAB_SIZE, HIDDEN_SIZE])
        inputs = tf.nn.embedding_lookup(embedding, self.input_data)
        if is_training:
            inputs = tf.nn.dropout(inputs, EMBEDDING_KEEP_PROB)

        outputs = []
        state = self.initial_state
        with tf.variable_scope("RNN"):
            for time_step in range(num_steps):
                if time_step > 0: tf.get_variable_scope().reuse_variables()
                cell_output, state = cell(inputs[:, time_step, :], state)
                outputs.append(cell_output)
        # Unroll the outputs into shape [batch, hidden_size*num_steps],
        # then reshape to [batch*num_steps, hidden_size].
        output = tf.reshape(tf.concat(outputs, 1), [-1, HIDDEN_SIZE])

        if SHARE_EMB_AND_SOFTMAX:
            weight = tf.transpose(embedding)
        else:
            weight = tf.get_variable("weight", [HIDDEN_SIZE, VOCAB_SIZE])
        bias = tf.get_variable("bias", [VOCAB_SIZE])
        logits = tf.matmul(output, weight) + bias

        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.reshape(self.targets, [-1]),
            logits=logits)
        self.cost = tf.reduce_sum(loss) / batch_size
        self.final_state = state

        # Define back-propagation only for the training model.
        if not is_training: return
        trainable_variables = tf.trainable_variables()
        # Clip the gradients.
        grads, _ = tf.clip_by_global_norm(
            tf.gradients(self.cost, trainable_variables), MAX_GRED_NORM)
        # Define the optimizer.
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
        # zip() pairs each gradient tensor with its variable.
        # Define the training step.
        self.train_op = optimizer.apply_gradients(
            zip(grads, trainable_variables))

def run_epoch(session, model, batches, train_op, output_log, step):
    total_costs = 0.0
    iters = 0
    state = session.run(model.initial_state)
    for x, y in batches:
        cost, state, _ = session.run(
            [model.cost, model.final_state, train_op],
            {model.input_data: x, model.targets: y,
             model.initial_state: state})
        total_costs += cost
        iters += model.num_steps
        # Log only during training.
        if output_log and step % 100 == 0:
            print("After %d steps, perplexity is %.3f" % (
                step, np.exp(total_costs / iters)))
        step += 1
    return step, np.exp(total_costs / iters)

# Read the file and return an array of word IDs.
def read_data(file_path):
    with open(file_path, "r") as fin:
        id_string = " ".join([line.strip() for line in fin.readlines()])
    id_list = [int(w) for w in id_string.split()]  # convert word IDs to integers
    return id_list

def make_batches(id_list, batch_size, num_step):
    # Total number of batches; each batch holds batch_size*num_step words.
    num_batches = (len(id_list) - 1) // (batch_size * num_step)
    data = np.array(id_list[:num_batches * batch_size * num_step])
    data = np.reshape(data, [batch_size, num_batches * num_step])
    data_batches = np.split(data, num_batches, axis=1)

    label = np.array(id_list[1:num_batches * batch_size * num_step + 1])
    label = np.reshape(label, [batch_size, num_batches * num_step])
    label_batches = np.split(label, num_batches, axis=1)
    return list(zip(data_batches, label_batches))

def main():
    # Define the initializer.
    intializer = tf.random_uniform_initializer(-0.05, 0.05)
    with tf.variable_scope("language_model", reuse=None, initializer=intializer):
        train_model = PTBModel(True, TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
    with tf.variable_scope("language_model", reuse=True, initializer=intializer):
        eval_model = PTBModel(False, EVAL_BATCH_SIZE, EVAL_NUM_STEP)

    with tf.Session() as session:
        tf.global_variables_initializer().run()
        train_batches = make_batches(read_data(TRAIN_PATH), TRAIN_BATCH_SIZE, TRAIN_NUM_STEP)
        eval_batches = make_batches(read_data(EVAL_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)
        test_batches = make_batches(read_data(TEST_PATH), EVAL_BATCH_SIZE, EVAL_NUM_STEP)

        step = 0
        for i in range(NUM_EPOCH):
            print("In iteration: %d" % (i + 1))
            step, train_pplx = run_epoch(session, train_model, train_batches,
                                         train_model.train_op, True, step)
            print("Epoch: %d Train perplexity: %.3f" % (i + 1, train_pplx))
            _, eval_pplx = run_epoch(session, eval_model, eval_batches,
                                     tf.no_op(), False, 0)
            print("Epoch: %d Eval perplexity: %.3f" % (i + 1, eval_pplx))
        _, test_pplx = run_epoch(session, eval_model, test_batches,
                                 tf.no_op(), False, 0)
        print("Test perplexity: %.3f" % test_pplx)

if __name__ == '__main__':
    main()
```
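The ValueError in the section title ('aer' is the first token of the raw PTB training text) fits this code: `read_data()` calls `int(w)` on every token, so it expects files that have already been converted to word IDs, not the raw `ptb.*.txt`. A hedged sketch of the missing word-to-ID step (`build_vocab` / `to_ids` are illustrative names; real PTB preprocessing also handles `<unk>` and `<eos>` and caps the vocabulary at VOCAB_SIZE):

```python
def build_vocab(tokens):
    # Assign an integer ID to every distinct word (sorted for determinism).
    return {w: i for i, w in enumerate(sorted(set(tokens)))}

def to_ids(tokens, vocab):
    # Map each word to its ID; this is the file read_data() expects to see.
    return [vocab[w] for w in tokens]

tokens = "aer banknote berlitz aer".split()
vocab = build_vocab(tokens)
ids = to_ids(tokens, vocab)
print(ids)  # [0, 1, 2, 0]
```

Writing `ids` out as space-separated integers (one sentence per line) produces input that `read_data()` can parse without raising `invalid literal for int()`.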

In TensorFlow, testing right after training gives good results, but restoring the saved model and testing gives slightly worse results. Why?

TypeError: 'FileWriter' object is not callable
