tf.contrib autocompletion missing: some TensorFlow modules get no code completion in PyCharm. How can this be fixed?

For example, with tf.contrib.rnn.DropoutWrapper, the DropoutWrapper part always has to be typed by hand, which is annoying.

1 Answer

You can change

from tensorflow import contrib

to

import tensorflow.contrib as ct

and completion suggestions will then appear after contrib.
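
A minimal sketch of the suggestion in context, assuming TensorFlow 1.x (where tf.contrib still exists); the rnn alias and the GRU cell are illustrative only:

```python
# Importing the contrib submodule directly gives PyCharm a concrete module to index,
# so attribute completion works; the lazy loader behind tf.contrib often defeats it.
import tensorflow.contrib.rnn as rnn  # or: import tensorflow.contrib as ct

cell = rnn.GRUCell(128)
cell = rnn.DropoutWrapper(cell, output_keep_prob=0.5)  # completion now suggests DropoutWrapper
```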

weixin_43301579
Kill Me~Heal Me replying to 乌鸦爱宝笛: What should be used instead in 2.0?
3 months ago Reply
weixin_43339264
weixin_43339264 replying to 随风而醒: How did you solve it?
4 months ago Reply
weixin_41658373
乌鸦爱宝笛: TensorFlow 2.0 no longer has these.
4 months ago Reply
sinat_38230425
鼠二二 replying to 随风而醒: Oh, I hadn't noticed, heh.
7 months ago Reply
SoundSlow
随风而醒: This question is from a while back and has already been solved. Thanks!
7 months ago Reply
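
On the TensorFlow 2.x question raised in the comments: tf.contrib was removed in 2.x, and the contrib RNN cells plus DropoutWrapper are usually replaced by the Keras RNN layers. A rough sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf  # assumes TensorFlow 2.x, where tf.contrib no longer exists

# dropout / recurrent_dropout roughly take over the role DropoutWrapper played in 1.x.
layer = tf.keras.layers.GRU(128, dropout=0.5, recurrent_dropout=0.5)
```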
Other related questions
A problem converting a Keras model to a TPU model on Colab
When using a TPU to speed up training, converting the Keras model to a TPU model raises an error, as shown here: ![error screenshot](https://img-ask.csdn.net/upload/202001/14/1578998736_238721.png) The key code is below. Imports:
```
%tensorflow_version 1.x
import json
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import callbacks
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```
The code that converts the model to a TPU model:
```
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)
```
ImportError: cannot import name 'cloud' from 'tensorflow.contrib', asking for help
I am using TensorFlow Object_Detection with TensorFlow 1.14.0 and Protobuf 3.10.0 installed and the paths configured, but running the test file raises ImportError: cannot import name 'cloud' from 'tensorflow.contrib'. Could someone tell me which library is missing?
How can I inspect the parameters of a trained model in TensorFlow?
After training with one of TensorFlow's prepackaged modules (for example tf.contrib.layers.fully_connected), how can I inspect the trained model's parameters, such as the weights and biases of a particular layer? Any advice is appreciated.
Does Python's django.contrib.auth.model need to be installed?
![screenshot](https://img-ask.csdn.net/upload/201709/20/1505879562_916978.png) When writing a Django blog in Python, does django.contrib.auth.model need to be installed? The original line is: from django.contrib.auth.model import User
TensorFlow error: missing module
AttributeError: module 'tensorflow.contrib.layers' has no attribute 'group_norm'. How can this be resolved?
Error when running the wechatsogou module in Python
The error is: ModuleNotFoundError: No module named 'werkzeug.contrib'. I checked werkzeug and it indeed has no contrib; my werkzeug is the latest version, 1.0.0. Could someone take a look and tell me what to do? Thanks.
TensorFlow error: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3], but the fed data looks correct; asking for advice
调试googlenet的代码,总是报错 InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3],但是我怎么看喂的数据都没问题,请大神们指点 ``` # -*- coding: utf-8 -*- """ GoogleNet也被称为InceptionNet Created on Mon Feb 10 12:15:35 2020 @author: 月光下的云海 """ import tensorflow as tf from keras.datasets import cifar10 import numpy as np import tensorflow.contrib.slim as slim tf.reset_default_graph() tf.reset_default_graph() (x_train,y_train),(x_test,y_test) = cifar10.load_data() x_train = x_train.astype('float32') x_test = x_test.astype('float32') y_train = y_train.astype('int32') y_test = y_test.astype('int32') y_train = y_train.reshape(y_train.shape[0]) y_test = y_test.reshape(y_test.shape[0]) x_train = x_train/255 x_test = x_test/255 #************************************************ 构建inception ************************************************ #构建一个多分支的网络结构 #INPUTS: # d0_1:最左边的分支,分支0,大小为1*1的卷积核个数 # d1_1:左数第二个分支,分支1,大小为1*1的卷积核的个数 # d1_3:左数第二个分支,分支1,大小为3*3的卷积核的个数 # d2_1:左数第三个分支,分支2,大小为1*1的卷积核的个数 # d2_5:左数第三个分支,分支2,大小为5*5的卷积核的个数 # d3_1:左数第四个分支,分支3,大小为1*1的卷积核的个数 # scope:参数域名称 # reuse:是否重复使用 #*************************************************************************************************************** def inception(x,d0_1,d1_1,d1_3,d2_1,d2_5,d3_1,scope = 'inception',reuse = None): with tf.variable_scope(scope,reuse = reuse): #slim.conv2d,slim.max_pool2d的默认参数都放在了slim的参数域里面 with slim.arg_scope([slim.conv2d,slim.max_pool2d],stride = 1,padding = 'SAME'): #第一个分支 with tf.variable_scope('branch0'): branch_0 = slim.conv2d(x,d0_1,[1,1],scope = 'conv_1x1') #第二个分支 with tf.variable_scope('branch1'): branch_1 = slim.conv2d(x,d1_1,[1,1],scope = 'conv_1x1') branch_1 = slim.conv2d(branch_1,d1_3,[3,3],scope = 'conv_3x3') #第三个分支 with tf.variable_scope('branch2'): branch_2 = slim.conv2d(x,d2_1,[1,1],scope = 'conv_1x1') branch_2 = slim.conv2d(branch_2,d2_5,[5,5],scope = 'conv_5x5') #第四个分支 with tf.variable_scope('branch3'): branch_3 = slim.max_pool2d(x,[3,3],scope = 'max_pool') branch_3 = slim.conv2d(branch_3,d3_1,[1,1],scope = 'conv_1x1') #连接 net = tf.concat([branch_0,branch_1,branch_2,branch_3],axis = -1) return net #*************************************** 使用inception构建GoogleNet ********************************************* #使用inception构建GoogleNet #INPUTS: # inputs-----------输入 # num_classes------输出类别数目 # is_trainning-----batch_norm层是否使用训练模式,batch_norm和is_trainning密切相关 # 当is_trainning = True 时候,它使用一个batch数据的平均移动,方差值 # 当is_trainning = Flase时候,它就使用固定的值 # verbos-----------控制打印信息 # reuse------------是否重复使用 #*************************************************************************************************************** def googlenet(inputs,num_classes,reuse = None,is_trainning = None,verbose = False): with slim.arg_scope([slim.batch_norm],is_training = is_trainning): with slim.arg_scope([slim.conv2d,slim.max_pool2d,slim.avg_pool2d], padding = 'SAME',stride = 1): net = inputs #googlnet的第一个块 with tf.variable_scope('block1',reuse = reuse): net = slim.conv2d(net,64,[5,5],stride = 2,scope = 'conv_5x5') if verbose: print('block1 output:{}'.format(net.shape)) #googlenet的第二个块 with tf.variable_scope('block2',reuse = reuse): net = slim.conv2d(net,64,[1,1],scope = 'conv_1x1') net = slim.conv2d(net,192,[3,3],scope = 'conv_3x3') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block2 output:{}'.format(net.shape)) #googlenet第三个块 with tf.variable_scope('block3',reuse = reuse): net = inception(net,64,96,128,16,32,32,scope = 'inception_1') net = 
inception(net,128,128,192,32,96,64,scope = 'inception_2') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block3 output:{}'.format(net.shape)) #googlenet第四个块 with tf.variable_scope('block4',reuse = reuse): net = inception(net,192,96,208,16,48,64,scope = 'inception_1') net = inception(net,160,112,224,24,64,64,scope = 'inception_2') net = inception(net,128,128,256,24,64,64,scope = 'inception_3') net = inception(net,112,144,288,24,64,64,scope = 'inception_4') net = inception(net,256,160,320,32,128,128,scope = 'inception_5') net = slim.max_pool2d(net,[3,3],stride = 2,scope = 'max_pool') if verbose: print('block4 output:{}'.format(net.shape)) #googlenet第五个块 with tf.variable_scope('block5',reuse = reuse): net = inception(net,256,160,320,32,128,128,scope = 'inception1') net = inception(net,384,182,384,48,128,128,scope = 'inception2') net = slim.avg_pool2d(net,[2,2],stride = 2,scope = 'avg_pool') if verbose: print('block5 output:{}'.format(net.shape)) #最后一块 with tf.variable_scope('classification',reuse = reuse): net = slim.flatten(net) net = slim.fully_connected(net,num_classes,activation_fn = None,normalizer_fn = None,scope = 'logit') if verbose: print('classification output:{}'.format(net.shape)) return net #给卷积层设置默认的激活函数和batch_norm with slim.arg_scope([slim.conv2d],activation_fn = tf.nn.relu,normalizer_fn = slim.batch_norm) as sc: conv_scope = sc is_trainning_ph = tf.placeholder(tf.bool,name = 'is_trainning') #定义占位符 x_train_ph = tf.placeholder(shape = (None,x_train.shape[1],x_train.shape[2],x_train.shape[3]),dtype = tf.float32) x_test_ph = tf.placeholder(shape = (None,x_test.shape[1],x_test.shape[2],x_test.shape[3]),dtype = tf.float32) y_train_ph = tf.placeholder(shape = (None,),dtype = tf.int32) y_test_ph = tf.placeholder(shape = (None,),dtype = tf.int32) #实例化网络 with slim.arg_scope(conv_scope): train_out = googlenet(x_train_ph,10,is_trainning = is_trainning_ph,verbose = True) val_out = googlenet(x_test_ph,10,is_trainning = is_trainning_ph,reuse = True) #定义loss和acc with tf.variable_scope('loss'): train_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_train_ph,logits = train_out,scope = 'train') val_loss = tf.losses.sparse_softmax_cross_entropy(labels = y_test_ph,logits = val_out,scope = 'val') with tf.name_scope('accurcay'): train_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(train_out,axis = -1,output_type = tf.int32),y_train_ph),tf.float32)) val_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(val_out,axis = -1,output_type = tf.int32),y_test_ph),tf.float32)) #定义训练op lr = 1e-2 opt = tf.train.MomentumOptimizer(lr,momentum = 0.9) #通过tf.get_collection获得所有需要更新的op update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS) #使用tesorflow控制流,先执行update_op再进行loss最小化 with tf.control_dependencies(update_op): train_op = opt.minimize(train_loss) #开启会话 sess = tf.Session() saver = tf.train.Saver() sess.run(tf.global_variables_initializer()) batch_size = 64 #开始训练 for e in range(10000): batch1 = np.random.randint(0,50000,size = batch_size) t_x_train = x_train[batch1][:][:][:] t_y_train = y_train[batch1] batch2 = np.random.randint(0,10000,size = batch_size) t_x_test = x_test[batch2][:][:][:] t_y_test = y_test[batch2] sess.run(train_op,feed_dict = {x_train_ph:t_x_train, is_trainning_ph:True, y_train_ph:t_y_train}) # if(e%1000 == 999): # loss_train,acc_train = sess.run([train_loss,train_acc], # feed_dict = {x_train_ph:t_x_train, # is_trainning_ph:True, # y_train_ph:t_y_train}) # loss_test,acc_test = sess.run([val_loss,val_acc], # feed_dict = {x_test_ph:t_x_test, # 
is_trainning_ph:False, # y_test_ph:t_y_test}) # print('STEP{}:train_loss:{:.6f} train_acc:{:.6f} test_loss:{:.6f} test_acc:{:.6f}' # .format(e+1,loss_train,acc_train,loss_test,acc_test)) saver.save(sess = sess,save_path = 'VGGModel\model.ckpt') print('Train Done!!') print('--'*60) sess.close() ``` 报错信息是 ``` Using TensorFlow backend. block1 output:(?, 16, 16, 64) block2 output:(?, 8, 8, 192) block3 output:(?, 4, 4, 480) block4 output:(?, 2, 2, 832) block5 output:(?, 1, 1, 1024) classification output:(?, 10) Traceback (most recent call last): File "<ipython-input-1-6385a760fe16>", line 1, in <module> runfile('F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py', wdir='F:/Project/TEMP/LearnTF/GoogleNet') File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py", line 177, in <module> y_train_ph:t_y_train}) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 900, in run run_metadata_ptr) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run run_metadata) File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call raise type(e)(node_def, op, message) InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3] [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,32,32,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]] [[Node: gradients/block4/inception_4/concat_grad/ShapeN/_45 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23694_gradients/block4/inception_4/concat_grad/ShapeN", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] ``` 看了好多遍都不是喂数据的问题,百度说是summary出了问题,可是我也没有summary呀,头晕~~~~
Why do testing right after training in TensorFlow and testing after reloading the saved model give different results, one good and one noticeably worse?
在tensorflow训练完模型,我直接采用同一个session进行测试,得到结果较好,但是采用训练完保存的模型,进行重新载入进行测试,结果较差,不懂是为什么会出现这样的结果。注:测试数据是一样的。以下是模型结果: 训练集:loss:0.384,acc:0.931. 验证集:loss:0.212,acc:0.968. 训练完在同一session内的测试集:acc:0.96。导入保存的模型进行测试:acc:0.29 ``` def create_model(hps): global_step = tf.Variable(tf.zeros([], tf.float64), name = 'global_step', trainable = False) scale = 1.0 / math.sqrt(hps.num_embedding_size + hps.num_lstm_nodes[-1]) / 3.0 print(type(scale)) gru_init = tf.random_normal_initializer(-scale, scale) with tf.variable_scope('Bi_GRU_nn', initializer = gru_init): for i in range(hps.num_lstm_layers): cell_bw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-bw') cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, output_keep_prob = dropout_keep_prob) cell_fw = tf.contrib.rnn.GRUCell(hps.num_lstm_nodes[i], activation = tf.nn.relu, name = 'cell-fw') cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, output_keep_prob = dropout_keep_prob) rnn_outputs, _ = tf.nn.bidirectional_dynamic_rnn(cell_bw, cell_fw, inputs, dtype=tf.float32) embeddedWords = tf.concat(rnn_outputs, 2) finalOutput = embeddedWords[:, -1, :] outputSize = hps.num_lstm_nodes[-1] * 2 # 因为是双向LSTM,最终的输出值是fw和bw的拼接,因此要乘以2 last = tf.reshape(finalOutput, [-1, outputSize]) # reshape成全连接层的输入维度 last = tf.layers.batch_normalization(last, training = is_training) fc_init = tf.uniform_unit_scaling_initializer(factor = 1.0) with tf.variable_scope('fc', initializer = fc_init): fc1 = tf.layers.dense(last, hps.num_fc_nodes, name = 'fc1') fc1_batch_normalization = tf.layers.batch_normalization(fc1, training = is_training) fc_activation = tf.nn.relu(fc1_batch_normalization) logits = tf.layers.dense(fc_activation, hps.num_classes, name = 'fc2') with tf.name_scope('metrics'): softmax_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits = logits, labels = tf.argmax(outputs, 1)) loss = tf.reduce_mean(softmax_loss) # [0, 1, 5, 4, 2] ->argmax:2 因为在第二个位置上是最大的 y_pred = tf.argmax(tf.nn.softmax(logits), 1, output_type = tf.int64, name = 'y_pred') # 计算准确率,看看算对多少个 correct_pred = tf.equal(tf.argmax(outputs, 1), y_pred) # tf.cast 将数据转换成 tf.float32 类型 accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) with tf.name_scope('train_op'): tvar = tf.trainable_variables() for var in tvar: print('variable name: %s' % (var.name)) grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvar), hps.clip_lstm_grads) optimizer = tf.train.AdamOptimizer(hps.learning_rate) train_op = optimizer.apply_gradients(zip(grads, tvar), global_step) # return((inputs, outputs, is_training), (loss, accuracy, y_pred), (train_op, global_step)) return((inputs, outputs), (loss, accuracy, y_pred), (train_op, global_step)) placeholders, metrics, others = create_model(hps) content, labels = placeholders loss, accuracy, y_pred = metrics train_op, global_step = others def val_steps(sess, x_batch, y_batch, writer = None): loss_val, accuracy_val = sess.run([loss,accuracy], feed_dict = {inputs: x_batch, outputs: y_batch, is_training: hps.val_is_training, dropout_keep_prob: 1.0}) return loss_val, accuracy_val loss_summary = tf.summary.scalar('loss', loss) accuracy_summary = tf.summary.scalar('accuracy', accuracy) # 将所有的变量都集合起来 merged_summary = tf.summary.merge_all() # 用于test测试的summary merged_summary_test = tf.summary.merge([loss_summary, accuracy_summary]) LOG_DIR = '.' 
run_label = 'run_Bi-GRU_Dropout_tensorboard' run_dir = os.path.join(LOG_DIR, run_label) if not os.path.exists(run_dir): os.makedirs(run_dir) train_log_dir = os.path.join(run_dir, timestamp, 'train') test_los_dir = os.path.join(run_dir, timestamp, 'test') if not os.path.exists(train_log_dir): os.makedirs(train_log_dir) if not os.path.join(test_los_dir): os.makedirs(test_los_dir) # saver得到的文件句柄,可以将文件训练的快照保存到文件夹中去 saver = tf.train.Saver(tf.global_variables(), max_to_keep = 5) # train 代码 init_op = tf.global_variables_initializer() train_keep_prob_value = 0.2 test_keep_prob_value = 1.0 # 由于如果按照每一步都去计算的话,会很慢,所以我们规定每100次存储一次 output_summary_every_steps = 100 num_train_steps = 1000 # 每隔多少次保存一次 output_model_every_steps = 500 # 测试集测试 test_model_all_steps = 4000 i = 0 session_conf = tf.ConfigProto( gpu_options = tf.GPUOptions(allow_growth=True), allow_soft_placement = True, log_device_placement = False) with tf.Session(config = session_conf) as sess: sess.run(init_op) # 将训练过程中,将loss,accuracy写入文件里,后面是目录和计算图,如果想要在tensorboard中显示计算图,就想sess.graph加上 train_writer = tf.summary.FileWriter(train_log_dir, sess.graph) # 同样将测试的结果保存到tensorboard中,没有计算图 test_writer = tf.summary.FileWriter(test_los_dir) batches = batch_iter(list(zip(x_train, y_train)), hps.batch_size, hps.num_epochs) for batch in batches: train_x, train_y = zip(*batch) eval_ops = [loss, accuracy, train_op, global_step] should_out_summary = ((i + 1) % output_summary_every_steps == 0) if should_out_summary: eval_ops.append(merged_summary) # 那三个占位符输进去 # 计算loss, accuracy, train_op, global_step的图 eval_ops.append(merged_summary) outputs_train = sess.run(eval_ops, feed_dict={ inputs: train_x, outputs: train_y, dropout_keep_prob: train_keep_prob_value, is_training: hps.train_is_training }) loss_train, accuracy_train = outputs_train[0:2] if should_out_summary: # 由于我们想在100steps之后计算summary,所以上面 should_out_summary = ((i + 1) % output_summary_every_steps == 0)成立, # 即为真True,那么我们将训练的内容放入eval_ops的最后面了,因此,我们想获得summary的结果得在eval_ops_results的最后一个 train_summary_str = outputs_train[-1] # 将获得的结果写训练tensorboard文件夹中,由于训练从0开始,所以这里加上1,表示第几步的训练 train_writer.add_summary(train_summary_str, i + 1) test_summary_str = sess.run([merged_summary_test], feed_dict = {inputs: x_dev, outputs: y_dev, dropout_keep_prob: 1.0, is_training: hps.val_is_training })[0] test_writer.add_summary(test_summary_str, i + 1) current_step = tf.train.global_step(sess, global_step) if (i + 1) % 100 == 0: print("Step: %5d, loss: %3.3f, accuracy: %3.3f" % (i + 1, loss_train, accuracy_train)) # 500个batch校验一次 if (i + 1) % 500 == 0: loss_eval, accuracy_eval = val_steps(sess, x_dev, y_dev) print("Step: %5d, val_loss: %3.3f, val_accuracy: %3.3f" % (i + 1, loss_eval, accuracy_eval)) if (i + 1) % output_model_every_steps == 0: path = saver.save(sess,os.path.join(out_dir, 'ckp-%05d' % (i + 1))) print("Saved model checkpoint to {}\n".format(path)) print('model saved to ckp-%05d' % (i + 1)) if (i + 1) % test_model_all_steps == 0: # test_loss, test_acc, all_predictions= sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, dropout_keep_prob: 1.0}) test_loss, test_acc, all_predictions= sess.run([loss, accuracy, y_pred], feed_dict = {inputs: x_test, outputs: y_test, is_training: hps.val_is_training, dropout_keep_prob: 1.0}) print("test_loss: %3.3f, test_acc: %3.3d" % (test_loss, test_acc)) batches = batch_iter(list(x_test), 128, 1, shuffle=False) # Collect the predictions here all_predictions = [] for x_test_batch in batches: batch_predictions = sess.run(y_pred, {inputs: x_test_batch, is_training: 
hps.val_is_training, dropout_keep_prob: 1.0}) all_predictions = np.concatenate([all_predictions, batch_predictions]) correct_predictions = float(sum(all_predictions == y.flatten())) print("Total number of test examples: {}".format(len(y_test))) print("Accuracy: {:g}".format(correct_predictions/float(len(y_test)))) test_y = y_test.argmax(axis = 1) #生成混淆矩阵 conf_mat = confusion_matrix(test_y, all_predictions) fig, ax = plt.subplots(figsize = (4,2)) sns.heatmap(conf_mat, annot=True, fmt = 'd', xticklabels = cat_id_df.category_id.values, yticklabels = cat_id_df.category_id.values) font_set = FontProperties(fname = r"/usr/share/fonts/truetype/wqy/wqy-microhei.ttc", size=15) plt.ylabel(u'实际结果',fontsize = 18,fontproperties = font_set) plt.xlabel(u'预测结果',fontsize = 18,fontproperties = font_set) plt.savefig('./test.png') print('accuracy %s' % accuracy_score(all_predictions, test_y)) print(classification_report(test_y, all_predictions,target_names = cat_id_df['category_name'].values)) print(classification_report(test_y, all_predictions)) i += 1 ``` 以上的模型代码,请求各位大神帮我看看,为什么出现这样的结果?
Configuring the settings.py file of a Django project in PyCharm
```
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # 'app1.apps.App1Config',
    'app1',
]
```
Note: the last line was not there originally; I added it myself. The commented-out line above it was originally present (I commented it out). In the tutorials I have seen, the commented-out line does not appear, yet the Django project I created with PyCharm has it. All the tutorials say to add the last line and never mention the commented-out one. As a beginner I find that line puzzling; what is it for?
Porting TensorFlow code to Keras
I have the following TensorFlow code and structure diagram for the generator part of an AC-GAN. It runs fine in plain TensorFlow, but porting it to Keras is giving me a headache.
```
def batch_norm(inputs, is_training=is_training, decay=0.9):
    return tf.contrib.layers.batch_norm(inputs, is_training=is_training, decay=decay)

# Build a residual block
def g_block(inputs):
    h0 = tf.nn.relu(batch_norm(conv2d(inputs, 3, 64, 1, use_bias=False)))
    h0 = batch_norm(conv2d(h0, 3, 64, 1, use_bias=False))
    h0 = tf.add(h0, inputs)
    return h0

# Generator
# batch_size = 32
# z : shape(32, 128)
# label : shape(32, 34)
def generator(z, label):
    with tf.variable_scope('generator', reuse=None):
        d = 16
        z = tf.concat([z, label], axis=1)
        h0 = tf.layers.dense(z, units=d * d * 64)
        h0 = tf.reshape(h0, shape=[-1, d, d, 64])
        h0 = tf.nn.relu(batch_norm(h0))
        shortcut = h0
        for i in range(16):
            h0 = g_block(h0)
        h0 = tf.nn.relu(batch_norm(h0))
        h0 = tf.add(h0, shortcut)
        for i in range(3):
            h0 = conv2d(h0, 3, 256, 1, use_bias=False)
            h0 = tf.depth_to_space(h0, 2)
            h0 = tf.nn.relu(batch_norm(h0))
        h0 = tf.layers.conv2d(h0, kernel_size=9, filters=3, strides=1, padding='same',
                              activation=tf.nn.tanh, name='g', use_bias=True)
        return h0
```
![Generator structure diagram](https://img-ask.csdn.net/upload/201910/29/1572278934_997142.png)
In Keras you normally build a Model first and keep adding layers to it, but the code above mixes computation on old and new tensors, for example:
```
....
shortcut = h0
....
h0 = tf.add(h0, shortcut)
```
Do I really have to build another model just for that intermediate output? Could someone explain, or show how this should be written in Keras?
TensorFlow model inference: two lists processed one after the other, but the output keeps looping over the first list; beginner asking for help
tensorflow模型推理,两个列表串行,输出结果是第一个列表的循环,新手求教 ``` from __future__ import print_function import argparse from datetime import datetime import os import sys import time import scipy.misc import scipy.io as sio import cv2 from glob import glob import multiprocessing os.environ["CUDA_VISIBLE_DEVICES"] = "0" import tensorflow as tf import numpy as np from PIL import Image from utils import * N_CLASSES = 20 DATA_DIR = './datasets/CIHP' LIST_PATH = './datasets/CIHP/list/val2.txt' DATA_ID_LIST = './datasets/CIHP/list/val_id2.txt' with open(DATA_ID_LIST, 'r') as f: NUM_STEPS = len(f.readlines()) RESTORE_FROM = './checkpoint/CIHP_pgn' # Load reader. with tf.name_scope("create_inputs") as scp1: reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, None) image, label, edge_gt = reader.image, reader.label, reader.edge image_rev = tf.reverse(image, tf.stack([1])) image_list = reader.image_list image_batch = tf.stack([image, image_rev]) label_batch = tf.expand_dims(label, dim=0) # Add one batch dimension. edge_gt_batch = tf.expand_dims(edge_gt, dim=0) h_orig, w_orig = tf.to_float(tf.shape(image_batch)[1]), tf.to_float(tf.shape(image_batch)[2]) image_batch050 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.50)), tf.to_int32(tf.multiply(w_orig, 0.50))])) image_batch075 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) image_batch125 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.25)), tf.to_int32(tf.multiply(w_orig, 1.25))])) image_batch150 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.50)), tf.to_int32(tf.multiply(w_orig, 1.50))])) image_batch175 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 1.75)), tf.to_int32(tf.multiply(w_orig, 1.75))])) ``` 新建网络 ``` # Create network. 
with tf.variable_scope('', reuse=False) as scope: net_100 = PGNModel({'data': image_batch}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_050 = PGNModel({'data': image_batch050}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_075 = PGNModel({'data': image_batch075}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_125 = PGNModel({'data': image_batch125}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_150 = PGNModel({'data': image_batch150}, is_training=False, n_classes=N_CLASSES) with tf.variable_scope('', reuse=True): net_175 = PGNModel({'data': image_batch175}, is_training=False, n_classes=N_CLASSES) # parsing net parsing_out1_050 = net_050.layers['parsing_fc'] parsing_out1_075 = net_075.layers['parsing_fc'] parsing_out1_100 = net_100.layers['parsing_fc'] parsing_out1_125 = net_125.layers['parsing_fc'] parsing_out1_150 = net_150.layers['parsing_fc'] parsing_out1_175 = net_175.layers['parsing_fc'] parsing_out2_050 = net_050.layers['parsing_rf_fc'] parsing_out2_075 = net_075.layers['parsing_rf_fc'] parsing_out2_100 = net_100.layers['parsing_rf_fc'] parsing_out2_125 = net_125.layers['parsing_rf_fc'] parsing_out2_150 = net_150.layers['parsing_rf_fc'] parsing_out2_175 = net_175.layers['parsing_rf_fc'] # edge net edge_out2_100 = net_100.layers['edge_rf_fc'] edge_out2_125 = net_125.layers['edge_rf_fc'] edge_out2_150 = net_150.layers['edge_rf_fc'] edge_out2_175 = net_175.layers['edge_rf_fc'] # combine resize parsing_out1 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out1_050, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_075, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_100, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_125, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_150, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out1_175, tf.shape(image_batch)[1:3,])]), axis=0) parsing_out2 = tf.reduce_mean(tf.stack([tf.image.resize_images(parsing_out2_050, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_075, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_100, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_125, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_150, tf.shape(image_batch)[1:3,]), tf.image.resize_images(parsing_out2_175, tf.shape(image_batch)[1:3,])]), axis=0) edge_out2_100 = tf.image.resize_images(edge_out2_100, tf.shape(image_batch)[1:3,]) edge_out2_125 = tf.image.resize_images(edge_out2_125, tf.shape(image_batch)[1:3,]) edge_out2_150 = tf.image.resize_images(edge_out2_150, tf.shape(image_batch)[1:3,]) edge_out2_175 = tf.image.resize_images(edge_out2_175, tf.shape(image_batch)[1:3,]) edge_out2 = tf.reduce_mean(tf.stack([edge_out2_100, edge_out2_125, edge_out2_150, edge_out2_175]), axis=0) raw_output = tf.reduce_mean(tf.stack([parsing_out1, parsing_out2]), axis=0) head_output, tail_output = tf.unstack(raw_output, num=2, axis=0) tail_list = tf.unstack(tail_output, num=20, axis=2) tail_list_rev = [None] * 20 for xx in range(14): tail_list_rev[xx] = tail_list[xx] tail_list_rev[14] = tail_list[15] tail_list_rev[15] = tail_list[14] tail_list_rev[16] = tail_list[17] tail_list_rev[17] = tail_list[16] tail_list_rev[18] = tail_list[19] tail_list_rev[19] = tail_list[18] tail_output_rev = tf.stack(tail_list_rev, axis=2) tail_output_rev = tf.reverse(tail_output_rev, tf.stack([1])) 
raw_output_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0) raw_output_all = tf.expand_dims(raw_output_all, dim=0) pred_scores = tf.reduce_max(raw_output_all, axis=3) raw_output_all = tf.argmax(raw_output_all, axis=3) pred_all = tf.expand_dims(raw_output_all, dim=3) # Create 4-d tensor. raw_edge = tf.reduce_mean(tf.stack([edge_out2]), axis=0) head_output, tail_output = tf.unstack(raw_edge, num=2, axis=0) tail_output_rev = tf.reverse(tail_output, tf.stack([1])) raw_edge_all = tf.reduce_mean(tf.stack([head_output, tail_output_rev]), axis=0) raw_edge_all = tf.expand_dims(raw_edge_all, dim=0) pred_edge = tf.sigmoid(raw_edge_all) res_edge = tf.cast(tf.greater(pred_edge, 0.5), tf.int32) # prepare ground truth preds = tf.reshape(pred_all, [-1,]) gt = tf.reshape(label_batch, [-1,]) weights = tf.cast(tf.less_equal(gt, N_CLASSES - 1), tf.int32) # Ignoring all labels greater than or equal to n_classes. mIoU, update_op_iou = tf.contrib.metrics.streaming_mean_iou(preds, gt, num_classes=N_CLASSES, weights=weights) macc, update_op_acc = tf.contrib.metrics.streaming_accuracy(preds, gt, weights=weights) # # Which variables to load. # restore_var = tf.global_variables() # # Set up tf session and initialize variables. # config = tf.ConfigProto() # config.gpu_options.allow_growth = True # # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7) # # config=tf.ConfigProto(gpu_options=gpu_options) # init = tf.global_variables_initializer() # evaluate prosessing parsing_dir = './output' # Set up tf session and initialize variables. config = tf.ConfigProto() config.gpu_options.allow_growth = True ``` 以上是初始化网络和初始化参数载入模型,下面定义两个函数分别处理val1.txt和val2.txt两个列表内部的数据。 ``` # 处理第一个列表函数 def humanParsing1(): # Which variables to load. restore_var = tf.global_variables() init = tf.global_variables_initializer() with tf.Session(config=config) as sess: sess.run(init) sess.run(tf.local_variables_initializer()) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] Load failed...") # Create queue coordinator. coord = tf.train.Coordinator() # Start queue threads. threads = tf.train.start_queue_runners(coord=coord, sess=sess) # Iterate over training steps. for step in range(NUM_STEPS): # parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])# , update_op parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge]) # , update_op print('step {:d}'.format(step)) print(image_list[step]) img_split = image_list[step].split('/') img_id = img_split[-1][:-4] msk = decode_labels(parsing_, num_classes=N_CLASSES) parsing_im = Image.fromarray(msk[0]) parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id)) coord.request_stop() coord.join(threads) # 处理第二个列表函数 def humanParsing2(): # Set up tf session and initialize variables. config = tf.ConfigProto() config.gpu_options.allow_growth = True # gpu_options=tf.GPUOptions(per_process_gpu_memory_fraction=0.7) # config=tf.ConfigProto(gpu_options=gpu_options) # Which variables to load. restore_var = tf.global_variables() init = tf.global_variables_initializer() with tf.Session(config=config) as sess: # Create queue coordinator. coord = tf.train.Coordinator() sess.run(init) sess.run(tf.local_variables_initializer()) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] 
Load failed...") LIST_PATH = './datasets/CIHP/list/val1.txt' DATA_ID_LIST = './datasets/CIHP/list/val_id1.txt' with open(DATA_ID_LIST, 'r') as f: NUM_STEPS = len(f.readlines()) # with tf.name_scope("create_inputs"): with tf.name_scope(scp1): tf.get_variable_scope().reuse_variables() reader = ImageReader(DATA_DIR, LIST_PATH, DATA_ID_LIST, None, False, False, False, coord) image, label, edge_gt = reader.image, reader.label, reader.edge image_rev = tf.reverse(image, tf.stack([1])) image_list = reader.image_list # Start queue threads. threads = tf.train.start_queue_runners(coord=coord, sess=sess) # Load weights. loader = tf.train.Saver(var_list=restore_var) if RESTORE_FROM is not None: if load(loader, sess, RESTORE_FROM): print(" [*] Load SUCCESS") else: print(" [!] Load failed...") # Iterate over training steps. for step in range(NUM_STEPS): # parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge])# , update_op parsing_, scores, edge_ = sess.run([pred_all, pred_scores, pred_edge]) # , update_op print('step {:d}'.format(step)) print(image_list[step]) img_split = image_list[step].split('/') img_id = img_split[-1][:-4] msk = decode_labels(parsing_, num_classes=N_CLASSES) parsing_im = Image.fromarray(msk[0]) parsing_im.save('{}/{}_vis.png'.format(parsing_dir, img_id)) coord.request_stop() coord.join(threads) if __name__ == '__main__': humanParsing1() humanParsing2() ``` 最终输出结果一直是第一个列表里面的循环,代码上用了 self.queue = tf.train.slice_input_producer([self.images, self.labels, self.edges], shuffle=shuffle),队列的方式进行多线程推理。最终得到的结果一直是第一个列表的循环,求大神告诉问题怎么解决。
An MNIST example from TensorFlow fails with an error; asking for help
from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf # Import data mnist = input_data.read_data_sets('MNIST_data/', one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784]) W = tf.Variable(tf.zeros([784, 10])) b = tf.Variable(tf.zeros([10])) y = tf.matmul(x, W) + b # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10]) # The raw formulation of cross-entropy, # # tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)), # reduction_indices=[1])) # # can be numerically unstable. # # So here we use tf.nn.softmax_cross_entropy_with_logits on the raw # outputs of 'y', and then average across the batch. cross_entropy = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) sess = tf.InteractiveSession() tf.global_variables_initializer().run() # Train for _ in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})) 错误如下: Traceback (most recent call last): File "/home/linbinghui/文档/pycode/Text-1.py", line 5, in <module> mnist = input_data.read_data_sets('MNIST_data/', one_hot=True) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 189, in read_data_sets local_file = maybe_download(TEST_IMAGES, train_dir, SOURCE_URL + TEST_IMAGES) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 81, in m aybe_download urllib.request.urlretrieve(source_url, temp_file_name) File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve return opener.retrieve(url, filename, reporthook, data) File "/usr/lib/python2.7/urllib.py", line 245, in retrieve fp = self.open(url, data) File "/usr/lib/python2.7/urllib.py", line 213, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 364, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "/usr/lib/python2.7/urllib.py", line 377, in http_error result = method(url, fp, errcode, errmsg, headers) File "/usr/lib/python2.7/urllib.py", line 642, in http_error_302 headers, data) File "/usr/lib/python2.7/urllib.py", line 669, in redirect_internal return self.open(newurl) File "/usr/lib/python2.7/urllib.py", line 213, in open return getattr(self, name)(url) File "/usr/lib/python2.7/urllib.py", line 350, in open_http h.endheaders(data) File "/usr/lib/python2.7/httplib.py", line 1053, in endheaders self._send_output(message_body) File "/usr/lib/python2.7/httplib.py", line 897, in _send_output self.send(msg) File "/usr/lib/python2.7/httplib.py", line 859, in send self.connect() File "/usr/lib/python2.7/httplib.py", line 836, in connect self.timeout, self.source_address) File "/usr/lib/python2.7/socket.py", line 575, in create_connection raise err IOError: [Errno socket error] [Errno 111] Connection refused
TensorFlow image recognition with my own image dataset: data cannot be fed; desperately asking for help!
使用tensorflow识别我自己的tfrecord文件时,在训练时无法feed数据,错误是placeholder那里,下面给出错误和我的代码,跪求大神帮助!!! 错误: ``` Traceback (most recent call last): File "/Users/hanjiarong/PycharmProjects/sample5/main.py", line 206, in <module> session.run(opti, feed_dict={x: session.run(batch_image), y: session.run(batch_label), keep_drop: dropout}) File "/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run run_metadata_ptr) File "/anaconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1113, in _run str(subfeed_t.get_shape()))) ValueError: Cannot feed value of shape (1, 227, 227, 3) for Tensor 'Placeholder:0', which has shape '(154587, ?)' ``` 下面是我的代码: ``` import tensorflow as tf from encode_to_tfrecords import create_record, create_test_record, read_and_decode, get_batch, get_test_batch n_input = 154587 n_classes = 3 dropout = 0.5 x = tf.placeholder(tf.float32, [None, n_input]) y = tf.placeholder(tf.int32, [None, n_classes]) keep_drop = tf.placeholder(tf.float32) class network(object): def inference(self, images,keep_drop): #################################################################################################################### # 向量转为矩阵 # images = tf.reshape(images, shape=[-1, 39,39, 3]) images = tf.reshape(images, shape=[-1, 227, 227, 3]) # [batch, in_height, in_width, in_channels] images = (tf.cast(images, tf.float32) / 255. - 0.5) * 2 # 归一化处理 #################################################################################################################### # 第一层 定义卷积偏置和下采样 conv1 = tf.nn.bias_add(tf.nn.conv2d(images, self.weights['conv1'], strides=[1, 4, 4, 1], padding='VALID'), self.biases['conv1']) relu1 = tf.nn.relu(conv1) pool1 = tf.nn.max_pool(relu1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 第二层 conv2 = tf.nn.bias_add(tf.nn.conv2d(pool1, self.weights['conv2'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv2']) relu2 = tf.nn.relu(conv2) pool2 = tf.nn.max_pool(relu2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 第三层 conv3 = tf.nn.bias_add(tf.nn.conv2d(pool2, self.weights['conv3'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv3']) relu3 = tf.nn.relu(conv3) # pool3=tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') conv4 = tf.nn.bias_add(tf.nn.conv2d(relu3, self.weights['conv4'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv4']) relu4 = tf.nn.relu(conv4) conv5 = tf.nn.bias_add(tf.nn.conv2d(relu4, self.weights['conv5'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv5']) relu5 = tf.nn.relu(conv5) pool5 = tf.nn.max_pool(relu5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 全连接层1,先把特征图转为向量 flatten = tf.reshape(pool5, [-1, self.weights['fc1'].get_shape().as_list()[0]]) # dropout比率选用0.5 drop1 = tf.nn.dropout(flatten, keep_drop) fc1 = tf.matmul(drop1, self.weights['fc1']) + self.biases['fc1'] fc_relu1 = tf.nn.relu(fc1) fc2 = tf.matmul(fc_relu1, self.weights['fc2']) + self.biases['fc2'] fc_relu2 = tf.nn.relu(fc2) fc3 = tf.matmul(fc_relu2, self.weights['fc3']) + self.biases['fc3'] return fc3 def __init__(self): # 初始化权值和偏置 with tf.variable_scope("weights"): self.weights = { # 39*39*3->36*36*20->18*18*20 'conv1': tf.get_variable('conv1', [11, 11, 3, 96], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 18*18*20->16*16*40->8*8*40 'conv2': tf.get_variable('conv2', [5, 5, 96, 256], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 8*8*40->6*6*60->3*3*60 'conv3': tf.get_variable('conv3', 
[3, 3, 256, 384], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 3*3*60->120 'conv4': tf.get_variable('conv4', [3, 3, 384, 384], initializer=tf.contrib.layers.xavier_initializer_conv2d()), 'conv5': tf.get_variable('conv5', [3, 3, 384, 256], initializer=tf.contrib.layers.xavier_initializer_conv2d()), 'fc1': tf.get_variable('fc1', [6 * 6 * 256, 4096], initializer=tf.contrib.layers.xavier_initializer()), 'fc2': tf.get_variable('fc2', [4096, 4096], initializer=tf.contrib.layers.xavier_initializer()), 'fc3': tf.get_variable('fc3', [4096, 1000], initializer=tf.contrib.layers.xavier_initializer()), } with tf.variable_scope("biases"): self.biases = { 'conv1': tf.get_variable('conv1', [96, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv2': tf.get_variable('conv2', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv3': tf.get_variable('conv3', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv4': tf.get_variable('conv4', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv5': tf.get_variable('conv5', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc1': tf.get_variable('fc1', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc2': tf.get_variable('fc2', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc3': tf.get_variable('fc3', [1000, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)) } # 计算softmax交叉熵损失函数 def sorfmax_loss(self, predicts, labels): predicts = tf.nn.softmax(predicts) labels = tf.one_hot(labels, self.weights['fc3'].get_shape().as_list()[1]) loss = tf.nn.softmax_cross_entropy_with_logits(logits=predicts, labels=labels) # loss =-tf.reduce_mean(labels * tf.log(predicts))# tf.nn.softmax_cross_entropy_with_logits(predicts, labels) self.cost = loss return self.cost # 梯度下降 def optimer(self, loss, lr=0.01): train_optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss) return train_optimizer #定义训练 # def train(self): create_record('/Users/hanjiarong/Documents/testdata/tfrtrain') # image, label = read_and_decode('train.tfrecords') # batch_image, batch_label = get_batch(image, label, 30) #连接网络 网络训练 net = network() inf = net.inference(x, dropout) loss = net.sorfmax_loss(inf,y) opti = net.optimer(loss) correct_pred = tf.equal(tf.argmax(inf, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # #定义测试 create_test_record('/Users/hanjiarong/Documents/testdata/tfrtest') # image_t, label_t = read_and_decode('test.tfrecords') # batch_test_image, batch_test_label = get_test_batch(image_t, label_t, 50) # # #生成测试 image, label = read_and_decode('train.tfrecords') batch_image, batch_label = get_batch(image, label, 1) # val, l = session.run([batch_image, batch_label]) # print(val.shape, l) with tf.Session() as session: init = tf.initialize_all_variables() session.run(init) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) max_iter = 100000 iter = 1 print("begin1") while iter * 30 < max_iter: # loss_np, _, label_np, image_np, inf_np = session.run([loss, opti, batch_label, batch_image, inf]) session.run(opti, feed_dict={x: session.run(batch_image), y: session.run(batch_label), keep_drop: dropout}) print("begin6") if iter % 10 == 0: loss, acc = session.run([loss, accuracy], feed_dict={x: batch_image, y: batch_label, keep_drop: 1.}) print("Iter " + str(iter * 30) + ", Minibatch Loss= " + \ "{:.6f}".format(loss) 
+ ", Training Accuracy= " + "{:.5f}".format(acc)) iter += 1 print("Optimization Finished!") image, label = read_and_decode('test.tfrecords') batch_test_image, batch_test_label = get_batch(image, label, 2) img_test, lab_test = session.run([batch_test_image, batch_test_label]) test_accuracy = session.run(accuracy, feed_dict={x: img_test, y: lab_test, keep_drop: 1.}) print("Testing Accuracy:", test_accuracy) ```
Problem training with TensorFlow under Ubuntu 16.04
tensorflow.python.framework.errors_impl.NotFoundError: /opt/tensorflow/bazel-bin/tensorflow/examples/image_retraining/retrain.runfiles/org_tensorflow/tensorflow/contrib/data/python/ops/../../_prefetching_ops.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E  I have no idea what causes this; any suggestions would be appreciated.
Can the logits inside a TensorFlow loss be a placeholder?
I am implementing handwritten digit recognition in TensorFlow and want the logits inside softmax_cross_entropy_with_logits to be a placeholder first, then feed the computed values into that placeholder at run time, but I get ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients. I know that simply passing outputs as the logits fixes it, but if I insist on routing the logits through a placeholder first, how do I solve this?
```
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/as/下载/resnet-152_mnist-master/mnist_dataset", one_hot=True)
from tensorflow.contrib.layers import fully_connected

x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
hidden1 = fully_connected(x, 100, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
hidden2 = fully_connected(hidden1, 200, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
hidden3 = fully_connected(hidden2, 200, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
outputs = fully_connected(hidden3, 10, activation_fn=None,
                          weights_initializer=tf.random_normal_initializer())

a = tf.placeholder(tf.float32, [None, 10])
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a)
reduce_mean_loss = tf.reduce_mean(loss)
equal_result = tf.equal(tf.argmax(outputs, 1), tf.argmax(y, 1))
cast_result = tf.cast(equal_result, dtype=tf.float32)
accuracy = tf.reduce_mean(cast_result)
train_op = tf.train.AdamOptimizer(0.001).minimize(reduce_mean_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(30000):
        xs, ys = mnist.train.next_batch(128)
        result = outputs.eval(feed_dict={x: xs})
        sess.run(train_op, feed_dict={a: result, y: ys})
        print(i)
```
Why does the accuracy never improve when using an RNN for handwritten digit recognition?
I am using the simplest possible RNN for MNIST handwritten digit recognition; why does the cross-entropy never decrease? The full code is below.
```
import tensorflow as tf
from tensorflow.contrib.layers import fully_connected
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/home/as/mnist_dataset', one_hot=True)

n_steps = 28
n_inputs = 28
n_neurons = 100

x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
action_one_hot = tf.placeholder(tf.float32, [None, 10])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.nn.dynamic_rnn(basic_cell, x, dtype=tf.float32)
y0 = fully_connected(states, 100, activation_fn=tf.nn.relu)
y = fully_connected(y0, 10)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=action_one_hot, logits=y)
mean_loss = tf.reduce_mean(cross_entropy)
trian_op = tf.train.AdamOptimizer(0.001).minimize(mean_loss)

with tf.Session() as sess:
    for i in range(10000):
        sess.run(tf.global_variables_initializer())
        x1, y1 = mnist.train.next_batch(1000)
        x1 = x1.reshape((-1, n_steps, n_inputs))
        sess.run(trian_op, feed_dict={x: x1, action_one_hot: y1})
        if i % 200 == 0:
            print(sess.run(mean_loss, feed_dict={x: x1, action_one_hot: y1}))
```
I print the cross-entropy every 200 steps, but it never goes down.
TensorFlow image recognition with a home-made dataset: the program hangs while reading the tfrecord file
我利用自己的图片做了一个数据集训练神经网络,在feed数据的时候数据类型不合适,加了session.run()程序就卡在这里不动了,下面贴出代码,跪求大神指导。 程序卡在print(“begin4”)和print(“begin5”)之间 ``` import tensorflow as tf from encode_to_tfrecords import create_record, create_test_record, read_and_decode, get_batch, get_test_batch n_input = 154587 n_classes = 3 dropout = 0.5 x = tf.placeholder(tf.float32, [None, n_input]) y = tf.placeholder(tf.int32, [None, n_classes]) keep_drop = tf.placeholder(tf.float32) class network(object): def inference(self, images,keep_drop): #################################################################################################################### # 向量转为矩阵 # images = tf.reshape(images, shape=[-1, 39,39, 3]) images = tf.reshape(images, shape=[-1, 227, 227, 3]) # [batch, in_height, in_width, in_channels] images = (tf.cast(images, tf.float32) / 255. - 0.5) * 2 # 归一化处理 #################################################################################################################### # 第一层 定义卷积偏置和下采样 conv1 = tf.nn.bias_add(tf.nn.conv2d(images, self.weights['conv1'], strides=[1, 4, 4, 1], padding='VALID'), self.biases['conv1']) relu1 = tf.nn.relu(conv1) pool1 = tf.nn.max_pool(relu1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 第二层 conv2 = tf.nn.bias_add(tf.nn.conv2d(pool1, self.weights['conv2'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv2']) relu2 = tf.nn.relu(conv2) pool2 = tf.nn.max_pool(relu2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 第三层 conv3 = tf.nn.bias_add(tf.nn.conv2d(pool2, self.weights['conv3'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv3']) relu3 = tf.nn.relu(conv3) # pool3=tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') conv4 = tf.nn.bias_add(tf.nn.conv2d(relu3, self.weights['conv4'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv4']) relu4 = tf.nn.relu(conv4) conv5 = tf.nn.bias_add(tf.nn.conv2d(relu4, self.weights['conv5'], strides=[1, 1, 1, 1], padding='SAME'), self.biases['conv5']) relu5 = tf.nn.relu(conv5) pool5 = tf.nn.max_pool(relu5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID') # 全连接层1,先把特征图转为向量 flatten = tf.reshape(pool5, [-1, self.weights['fc1'].get_shape().as_list()[0]]) # dropout比率选用0.5 drop1 = tf.nn.dropout(flatten, keep_drop) fc1 = tf.matmul(drop1, self.weights['fc1']) + self.biases['fc1'] fc_relu1 = tf.nn.relu(fc1) fc2 = tf.matmul(fc_relu1, self.weights['fc2']) + self.biases['fc2'] fc_relu2 = tf.nn.relu(fc2) fc3 = tf.matmul(fc_relu2, self.weights['fc3']) + self.biases['fc3'] return fc3 def __init__(self): # 初始化权值和偏置 with tf.variable_scope("weights"): self.weights = { # 39*39*3->36*36*20->18*18*20 'conv1': tf.get_variable('conv1', [11, 11, 3, 96], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 18*18*20->16*16*40->8*8*40 'conv2': tf.get_variable('conv2', [5, 5, 96, 256], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 8*8*40->6*6*60->3*3*60 'conv3': tf.get_variable('conv3', [3, 3, 256, 384], initializer=tf.contrib.layers.xavier_initializer_conv2d()), # 3*3*60->120 'conv4': tf.get_variable('conv4', [3, 3, 384, 384], initializer=tf.contrib.layers.xavier_initializer_conv2d()), 'conv5': tf.get_variable('conv5', [3, 3, 384, 256], initializer=tf.contrib.layers.xavier_initializer_conv2d()), 'fc1': tf.get_variable('fc1', [6 * 6 * 256, 4096], initializer=tf.contrib.layers.xavier_initializer()), 'fc2': tf.get_variable('fc2', [4096, 4096], initializer=tf.contrib.layers.xavier_initializer()), 'fc3': tf.get_variable('fc3', [4096, 1000], 
initializer=tf.contrib.layers.xavier_initializer()), } with tf.variable_scope("biases"): self.biases = { 'conv1': tf.get_variable('conv1', [96, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv2': tf.get_variable('conv2', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv3': tf.get_variable('conv3', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv4': tf.get_variable('conv4', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'conv5': tf.get_variable('conv5', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc1': tf.get_variable('fc1', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc2': tf.get_variable('fc2', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)), 'fc3': tf.get_variable('fc3', [1000, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)) } # 计算softmax交叉熵损失函数 def sorfmax_loss(self, predicts, labels): predicts = tf.nn.softmax(predicts) labels = tf.one_hot(labels, self.weights['fc3'].get_shape().as_list()[1]) loss = tf.nn.softmax_cross_entropy_with_logits(logits=predicts, labels=labels) # loss =-tf.reduce_mean(labels * tf.log(predicts))# tf.nn.softmax_cross_entropy_with_logits(predicts, labels) self.cost = loss return self.cost # 梯度下降 def optimer(self, loss, lr=0.01): train_optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss) return train_optimizer #定义训练 # def train(self): create_record('/Users/hanjiarong/Documents/testdata/tfrtrain') # image, label = read_and_decode('train.tfrecords') # batch_image, batch_label = get_batch(image, label, 30) #连接网络 网络训练 net = network() inf = net.inference(x, dropout) loss = net.sorfmax_loss(inf,y) opti = net.optimer(loss) correct_pred = tf.equal(tf.argmax(inf, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # #定义测试 create_test_record('/Users/hanjiarong/Documents/testdata/tfrtest') # image_t, label_t = read_and_decode('test.tfrecords') # batch_test_image, batch_test_label = get_test_batch(image_t, label_t, 50) # # #生成测试 init = tf.initialize_all_variables() with tf.Session() as session: session.run(init) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) max_iter = 100000 iter = 1 print("begin1") while iter * 30 < max_iter: print("begin2") image, label = read_and_decode('train.tfrecords') print("begin3") batch_image, batch_label = get_batch(image, label, 1) print("begin4") batch_image = session.run(batch_image) batch_label = session.run(batch_label) print("begin5") # loss_np, _, label_np, image_np, inf_np = session.run([loss, opti, batch_label, batch_image, inf]) session.run(opti, feed_dict={x: batch_image, y: batch_label, keep_drop: dropout}) print("begin6") if iter % 10 == 0: loss, acc = session.run([loss, accuracy], feed_dict={x: batch_image, y: batch_label, keep_drop: 1.}) print("Iter " + str(iter * 30) + ", Minibatch Loss= " + \ "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc)) iter += 1 print("Optimization Finished!") image, label = read_and_decode('test.tfrecords') batch_test_image, batch_test_label = get_batch(image, label, 2) img_test, lab_test = session.run([batch_test_image, batch_test_label]) test_accuracy = session.run(accuracy, feed_dict={x: img_test, y: lab_test, keep_drop: 1.}) print("Testing Accuracy:", test_accuracy) ```
Does this TensorFlow RNN model only use one set of output parameters?
# -*- coding: utf-8 -*- """ Created on Thu Jan 11 08:56:10 2018 @author: Administrator """ from tensorflow.contrib import rnn import numpy as np import tensorflow as tf c=np.load('C:/Users/Administrator/Desktop/jm00train.npy') d=np.load('C:/Users/Administrator/Desktop/jm00label.npy') jm00train=c[:140000] jm00test=c[140000:] c=np.float32(c) jm00trainlabel=d[:140000] jm00trainlabelonehot=tf.one_hot(jm00trainlabel,7) jm00testlabel=d[140000:] jm00testlabelonehot=tf.one_hot(jm00testlabel,7) n_inputs=38 max_time=50 lstm_size=20 n_classes=7 #batch_size=1 #n_batch= x=tf.placeholder(tf.float32,[None,50,38]) y=tf.placeholder(tf.float32,[None,7]) weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev=0.1)) #初始化偏置值 biases = tf.Variable(tf.constant(0.1, shape=[n_classes])) #定义RNN网络 def RNN(X,weights,biases): # inputs=[batch_size, max_time, n_inputs] inputs = tf.reshape(X,[-1,max_time,n_inputs]) #定义LSTM基本CELL lstm_cell = rnn.BasicLSTMCell(lstm_size) # final_state[0]是cell state # final_state[1]是hidden_state outputs,final_state = tf.nn.dynamic_rnn(lstm_cell,inputs,dtype=tf.float32) results = tf.nn.softmax(tf.matmul(final_state[1],weights) + biases) return results #计算RNN的返回结果 prediction= RNN(x, weights, biases) #损失函数 cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y)) #使用AdamOptimizer进行优化 train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) #结果存放在一个布尔型列表中 correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(prediction,1))#argmax返回一维张量中最大的值所在的位置 #求准确率 accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)#把correct_prediction变为float32类型 #初始化 #init= #init = tf.global_variables_initializer() #init=tf.global_variables_initializer() init=tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) sess.run(train_step,feed_dict={x:jm00train,y:jm00trainlabel}) acc = sess.run(accuracy,feed_dict={x:jm00test,y:jm00testlabelonehot}) print ("Iter " + ", Testing Accuracy= " + str(acc))
How do I import ImageNet data into TensorFlow for experiments?
请问一下,将数据集转为tfrecord格式之后,自己load数据的时候经常跑到一半报错 ``` tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0) [[Node: shuffle_batch = QueueDequeueUpToV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]] ``` 怎么回事,我这部分的代码大致是这样的: ``` import os import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import tensorflow.contrib.slim as slim from PIL import Image tfrecord_paths = "./ImageNet_validate.tfrecord" def read_and_decode(filename): #根据文件名生成一个队列 filename_queue = tf.train.string_input_producer([filename]) reader = tf.TFRecordReader() _, serialized_example = reader.read(filename_queue) #返回文件名和文件 features = tf.parse_single_example(serialized_example, features={ 'label': tf.FixedLenFeature([], tf.int64), 'image' : tf.FixedLenFeature([], tf.string), }) img = tf.decode_raw(features['image'], tf.uint8) img = tf.reshape(img,[1,433200]) # img = tf.reshape(img, [380, 380, 3]) # img = tf.cast(img, tf.float32) * (1. / 255) - 0.5 label = tf.cast(features['label'], tf.int32) return img, label img, label = read_and_decode(tfrecord_paths) img_batch, label_batch = tf.train.shuffle_batch([img, label], batch_size=1, capacity=1000, num_threads = 512, allow_smaller_final_batch=True, min_after_dequeue=1) global_init = tf.global_variables_initializer() local_init = tf.local_variables_initializer() with tf.Session() as sess: sess.run(global_init) sess.run(local_init) coord=tf.train.Coordinator() threads= tf.train.start_queue_runners(coord=coord) for i in range(1000): print(i) print("image:",img_batch.get_shape().as_list()) print("label:",label_batch.get_shape().as_list()) val, l= sess.run([img_batch,label_batch]) print(val.shape, l) ```