A question about the loss function in deep learning.

[image attachment: screenshot of the loss function in question]

Could someone explain this problem in detail?
Thanks in advance, everyone.

1 answer

lambda should be a hyper-parameter, not a parameter. The coefficients of the loss function should stay fixed throughout training; you then learn w (the weights) to minimize the error. If the loss function's own coefficients were still changing, there would be no stable objective to learn against.
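A minimal sketch of that distinction, assuming lambda is an L2-regularization coefficient (the screenshot is not visible here, so that reading is an assumption): lambda is fixed before training as a hyper-parameter, and gradient descent only ever updates w.

```python
import numpy as np

# lam is the hyperparameter: chosen before training and held fixed throughout.
lam = 0.01
lr = 0.1

# w is the parameter: it is the only thing the optimizer updates.
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
X, y = rng.standard_normal((100, 10)), rng.standard_normal(100)

for _ in range(100):
    resid = X @ w - y
    # L2-regularized squared error: data term + lam * penalty term.
    loss = 0.5 * np.mean(resid ** 2) + 0.5 * lam * np.sum(w ** 2)
    grad = X.T @ resid / len(y) + lam * w
    w -= lr * grad  # only w changes; lam never does
```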

Other related questions
How do you write a threshold loss function in Keras?
![image](https://img-ask.csdn.net/upload/201908/16/1565920611_335043.png)
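The exact loss is only defined in the screenshot above, so the following is just the generic pattern for a thresholded custom loss in Keras; the names and the threshold rule (squared error ignored below a margin) are placeholders, not the loss from the image:

```python
import keras.backend as K

def threshold_loss(threshold=0.1):
    # Placeholder rule: squared errors below the threshold are ignored.
    def loss(y_true, y_pred):
        err = K.square(y_true - y_pred)
        return K.mean(K.switch(err > threshold, err, K.zeros_like(err)), axis=-1)
    return loss

# Usage: model.compile(optimizer='adam', loss=threshold_loss(0.1))
```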
Training on fashion_mnist with TensorFlow in Spyder: the loss is NaN and the accuracy never changes
1. I built a neural network with three fully connected layers.
2. The code is as follows:
```python
import matplotlib as mpl
import matplotlib.pyplot as plt
#%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(sys.version_info)
for module in mpl, np, sklearn, tf, keras:
    print(module.__name__, module.__version__)

fashion_mnist = keras.datasets.fashion_mnist
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]

# tf.keras.models.Sequential
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))

# "sparse" because the labels are integer indices; with one-hot labels,
# the "sparse" prefix is not needed.
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
              metrics=["accuracy"])
# model.layers
# model.summary()

history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid, y_valid))
```
3. The output:
```
runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2')
2.0.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
sklearn 0.21.3
tensorflow 2.0.0
tensorflow_core.keras 2.2.4-tf
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause:
55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
Epoch 2/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
...
Epoch 10/10
55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914
```
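One common cause fits the code above, offered as a suggestion rather than a verified diagnosis of this exact run: the raw Fashion-MNIST pixels (0 to 255, uint8) are fed to SGD unscaled, which routinely drives the softmax cross-entropy to NaN within the first epoch. Normalizing the inputs before `fit` is the usual fix:

```python
# Scale pixels to [0, 1]; unscaled 0-255 inputs under SGD often
# produce NaN losses almost immediately.
x_train = x_train / 255.0
x_valid = x_valid / 255.0
x_test = x_test / 255.0
```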
Errors I can't resolve while training a style-transfer model
losses.py (imported below as `losses`):

```python
# coding: utf-8
from __future__ import print_function

import tensorflow as tf
from nets import nets_factory
from preprocessing import preprocessing_factory
import utils
import os

slim = tf.contrib.slim


def gram(layer):
    shape = tf.shape(layer)
    num_images = shape[0]
    width = shape[1]
    height = shape[2]
    num_filters = shape[3]
    filters = tf.reshape(layer, tf.stack([num_images, -1, num_filters]))
    grams = tf.matmul(filters, filters, transpose_a=True) / tf.to_float(width * height * num_filters)
    return grams


def get_style_features(FLAGS):
    """
    For the "style_image", the preprocessing step is:
    1. Resize the shorter side to FLAGS.image_size
    2. Apply central crop
    """
    with tf.Graph().as_default():
        network_fn = nets_factory.get_network_fn(
            FLAGS.loss_model,
            num_classes=1,
            is_training=False)
        image_preprocessing_fn, image_unprocessing_fn = preprocessing_factory.get_preprocessing(
            FLAGS.loss_model,
            is_training=False)

        # Get the style image data
        size = FLAGS.image_size
        img_bytes = tf.read_file(FLAGS.style_image)
        if FLAGS.style_image.lower().endswith('png'):
            image = tf.image.decode_png(img_bytes)
        else:
            image = tf.image.decode_jpeg(img_bytes)
        # image = _aspect_preserving_resize(image, size)

        # Add the batch dimension
        images = tf.expand_dims(image_preprocessing_fn(image, size, size), 0)
        # images = tf.stack([image_preprocessing_fn(image, size, size)])

        _, endpoints_dict = network_fn(images, spatial_squeeze=False)
        features = []
        for layer in FLAGS.style_layers:
            feature = endpoints_dict[layer]
            feature = tf.squeeze(gram(feature), [0])  # remove the batch dimension
            features.append(feature)

        with tf.Session() as sess:
            # Restore variables for loss network.
            init_func = utils._get_init_fn(FLAGS)
            init_func(sess)

            # Make sure the 'generated' directory exists.
            if os.path.exists('generated') is False:
                os.makedirs('generated')
            # Indicate cropped style image path
            save_file = 'generated/target_style_' + FLAGS.naming + '.jpg'
            # Write preprocessed style image to indicated path
            with open(save_file, 'wb') as f:
                target_image = image_unprocessing_fn(images[0, :])
                value = tf.image.encode_jpeg(tf.cast(target_image, tf.uint8))
                f.write(sess.run(value))
                tf.logging.info('Target style pattern is saved to: %s.' % save_file)

            # Return the features of the layers used for measuring style loss.
            return sess.run(features)


def style_loss(endpoints_dict, style_features_t, style_layers):
    style_loss = 0
    style_loss_summary = {}
    for style_gram, layer in zip(style_features_t, style_layers):
        generated_images, _ = tf.split(endpoints_dict[layer], 2, 0)
        size = tf.size(generated_images)
        layer_style_loss = tf.nn.l2_loss(gram(generated_images) - style_gram) * 2 / tf.to_float(size)
        style_loss_summary[layer] = layer_style_loss
        style_loss += layer_style_loss
    return style_loss, style_loss_summary


def content_loss(endpoints_dict, content_layers):
    content_loss = 0
    for layer in content_layers:
        generated_images, content_images = tf.split(endpoints_dict[layer], 2, 0)
        size = tf.size(generated_images)
        content_loss += tf.nn.l2_loss(generated_images - content_images) * 2 / tf.to_float(size)  # remain the same as in the paper
    return content_loss


def total_variation_loss(layer):
    shape = tf.shape(layer)
    height = shape[1]
    width = shape[2]
    y = tf.slice(layer, [0, 0, 0, 0], tf.stack([-1, height - 1, -1, -1])) - tf.slice(layer, [0, 1, 0, 0], [-1, -1, -1, -1])
    x = tf.slice(layer, [0, 0, 0, 0], tf.stack([-1, -1, width - 1, -1])) - tf.slice(layer, [0, 0, 1, 0], [-1, -1, -1, -1])
    loss = tf.nn.l2_loss(x) / tf.to_float(tf.size(x)) + tf.nn.l2_loss(y) / tf.to_float(tf.size(y))
    return loss
```

train.py:

```python
from __future__ import print_function
from __future__ import division

import tensorflow as tf
from nets import nets_factory
from preprocessing import preprocessing_factory
import reader
import model
import time
import losses
import utils
import os
import argparse

slim = tf.contrib.slim


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--conf', default='conf/mosaic.yml', help='the path to the conf file')
    return parser.parse_args()


def main(FLAGS):
    style_features_t = losses.get_style_features(FLAGS)

    # Make sure the training path exists.
    training_path = os.path.join(FLAGS.model_path, FLAGS.naming)
    if not(os.path.exists(training_path)):
        os.makedirs(training_path)

    with tf.Graph().as_default():
        with tf.Session() as sess:
            """Build Network"""
            network_fn = nets_factory.get_network_fn(
                FLAGS.loss_model,
                num_classes=1,
                is_training=False)
            image_preprocessing_fn, image_unprocessing_fn = preprocessing_factory.get_preprocessing(
                FLAGS.loss_model,
                is_training=False)
            processed_images = reader.image(FLAGS.batch_size, FLAGS.image_size, FLAGS.image_size,
                                            'F:\Anaconda3\7\train2014/', image_preprocessing_fn,
                                            epochs=FLAGS.epoch)
            generated = model.net(processed_images, training=True)
            processed_generated = [
                image_preprocessing_fn(image, FLAGS.image_size, FLAGS.image_size)
                for image in tf.unstack(generated, axis=0, num=FLAGS.batch_size)
            ]
            processed_generated = tf.stack(processed_generated)
            _, endpoints_dict = network_fn(tf.concat([processed_generated, processed_images], 0),
                                           spatial_squeeze=False)

            # Log the structure of loss network
            tf.logging.info('Loss network layers(You can define them in "content_layers" and "style_layers"):')
            for key in endpoints_dict:
                tf.logging.info(key)

            """Build Losses"""
            content_loss = losses.content_loss(endpoints_dict, FLAGS.content_layers)
            style_loss, style_loss_summary = losses.style_loss(endpoints_dict, style_features_t, FLAGS.style_layers)
            tv_loss = losses.total_variation_loss(generated)  # use the unprocessed image

            loss = FLAGS.style_weight * style_loss + FLAGS.content_weight * content_loss + FLAGS.tv_weight * tv_loss

            # Add Summary for visualization in tensorboard.
            """Add Summary"""
            tf.summary.scalar('losses/content_loss', content_loss)
            tf.summary.scalar('losses/style_loss', style_loss)
            tf.summary.scalar('losses/regularizer_loss', tv_loss)
            tf.summary.scalar('weighted_losses/weighted_content_loss', content_loss * FLAGS.content_weight)
            tf.summary.scalar('weighted_losses/weighted_style_loss', style_loss * FLAGS.style_weight)
            tf.summary.scalar('weighted_losses/weighted_regularizer_loss', tv_loss * FLAGS.tv_weight)
            tf.summary.scalar('total_loss', loss)
            for layer in FLAGS.style_layers:
                tf.summary.scalar('style_losses/' + layer, style_loss_summary[layer])
            tf.summary.image('generated', generated)
            # tf.image_summary('processed_generated', processed_generated)  # May be better?
            tf.summary.image('origin', tf.stack([
                image_unprocessing_fn(image)
                for image in tf.unstack(processed_images, axis=0, num=FLAGS.batch_size)
            ]))
            summary = tf.summary.merge_all()
            writer = tf.summary.FileWriter(training_path)

            """Prepare to Train"""
            global_step = tf.Variable(0, name="global_step", trainable=False)

            variable_to_train = []
            for variable in tf.trainable_variables():
                if not(variable.name.startswith(FLAGS.loss_model)):
                    variable_to_train.append(variable)
            train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, global_step=global_step,
                                                             var_list=variable_to_train)

            variables_to_restore = []
            for v in tf.global_variables():
                if not(v.name.startswith(FLAGS.loss_model)):
                    variables_to_restore.append(v)
            saver = tf.train.Saver(variables_to_restore, write_version=tf.train.SaverDef.V1)

            sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])

            # Restore variables for loss network.
            init_func = utils._get_init_fn(FLAGS)
            init_func(sess)

            # Restore variables for training model if the checkpoint file exists.
            last_file = tf.train.latest_checkpoint(training_path)
            if last_file:
                tf.logging.info('Restoring model from {}'.format(last_file))
                saver.restore(sess, last_file)

            """Start Training"""
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)
            start_time = time.time()
            try:
                while not coord.should_stop():
                    _, loss_t, step = sess.run([train_op, loss, global_step])
                    elapsed_time = time.time() - start_time
                    start_time = time.time()

                    """logging"""
                    # print(step)
                    if step % 10 == 0:
                        tf.logging.info('step: %d, total Loss %f, secs/step: %f' % (step, loss_t, elapsed_time))

                    """summary"""
                    if step % 25 == 0:
                        tf.logging.info('adding summary...')
                        summary_str = sess.run(summary)
                        writer.add_summary(summary_str, step)
                        writer.flush()

                    """checkpoint"""
                    if step % 1000 == 0:
                        saver.save(sess, os.path.join(training_path, 'fast-style-model.ckpt'), global_step=step)
            except tf.errors.OutOfRangeError:
                saver.save(sess, os.path.join(training_path, 'fast-style-model.ckpt-done'))
                tf.logging.info('Done training -- epoch limit reached')
            finally:
                coord.request_stop()
                coord.join(threads)


if __name__ == '__main__':
    tf.logging.set_verbosity(tf.logging.INFO)
    args = parse_args()
    FLAGS = utils.read_conf_file(args.conf)
    main(FLAGS)
```

![image](https://img-ask.csdn.net/upload/202003/08/1583658602_821912.png)
The error is shown in the screenshot above.
Python: running the code raises BrokenPipeError: [Errno 32] Broken pipe. How do I fix it?
Environment: Anaconda + Python 3.7 + Spyder + Win10.
When I try to run an AlexNet network I keep getting BrokenPipeError: [Errno 32] Broken pipe. Following other posts, I went to change the num_workers argument of torch.utils.data.DataLoader(), but num_workers is already 0.
The code comes from: https://github.com/jindongwang/transferlearning/tree/master/code/deep/finetune_AlexNet_ResNet
Details:
```python
from __future__ import print_function
import argparse
import data_loader
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import time

# Command setting
parser = argparse.ArgumentParser(description='Finetune')
parser.add_argument('-model', '-m', type=str, help='model name', default='resnet')
parser.add_argument('-batchsize', '-b', type=int, help='batch size', default=64)
parser.add_argument('-cuda', '-g', type=int, help='cuda id', default=0)
parser.add_argument('-source', '-src', type=str, default='amazon')
parser.add_argument('-target', '-tar', type=str, default='webcam')
args = parser.parse_args()

# Parameter setting
DEVICE = torch.device('cuda:' + str(args.cuda) if torch.cuda.is_available() else 'cpu')
N_CLASS = 31
LEARNING_RATE = 1e-4
BATCH_SIZE = {'src': int(args.batchsize), 'tar': int(args.batchsize)}
N_EPOCH = 100
MOMENTUM = 0.9
DECAY = 5e-4


def load_model(name='alexnet'):
    if name == 'alexnet':
        model = torchvision.models.alexnet(pretrained=True)
        n_features = model.classifier[6].in_features
        fc = torch.nn.Linear(n_features, N_CLASS)
        model.classifier[6] = fc
    elif name == 'resnet':
        model = torchvision.models.resnet50(pretrained=True)
        n_features = model.fc.in_features
        fc = torch.nn.Linear(n_features, N_CLASS)
        model.fc = fc
    model.fc.weight.data.normal_(0, 0.005)
    model.fc.bias.data.fill_(0.1)
    return model


def get_optimizer(model_name):
    learning_rate = LEARNING_RATE
    if model_name == 'alexnet':
        param_group = [
            {'params': model.features.parameters(), 'lr': learning_rate}]
        for i in range(6):
            param_group += [{'params': model.classifier[i].parameters(), 'lr': learning_rate}]
        param_group += [{'params': model.classifier[6].parameters(), 'lr': learning_rate * 10}]
    elif model_name == 'resnet':
        param_group = []
        for k, v in model.named_parameters():
            if not k.__contains__('fc'):
                param_group += [{'params': v, 'lr': learning_rate}]
            else:
                param_group += [{'params': v, 'lr': learning_rate * 10}]
    optimizer = optim.SGD(param_group, momentum=MOMENTUM)
    return optimizer


# Schedule learning rate
def lr_schedule(optimizer, epoch):
    def lr_decay(LR, n_epoch, e):
        return LR / (1 + 10 * e / n_epoch) ** 0.75

    for i in range(len(optimizer.param_groups)):
        if i < len(optimizer.param_groups) - 1:
            optimizer.param_groups[i]['lr'] = lr_decay(LEARNING_RATE, N_EPOCH, epoch)
        else:
            optimizer.param_groups[i]['lr'] = lr_decay(LEARNING_RATE, N_EPOCH, epoch) * 10


def finetune(model, dataloaders, optimizer):
    since = time.time()
    best_acc = 0.0
    acc_hist = []
    criterion = nn.CrossEntropyLoss()
    for epoch in range(1, N_EPOCH + 1):
        # lr_schedule(optimizer, epoch)
        print('Learning rate: {:.8f}'.format(optimizer.param_groups[0]['lr']))
        print('Learning rate: {:.8f}'.format(optimizer.param_groups[-1]['lr']))
        for phase in ['src', 'val', 'tar']:
            if phase == 'src':
                model.train()
            else:
                model.eval()
            total_loss, correct = 0, 0
            for inputs, labels in dataloaders[phase]:
                inputs, labels = inputs.to(DEVICE), labels.to(DEVICE)
                optimizer.zero_grad()
                with torch.set_grad_enabled(phase == 'src'):
                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                preds = torch.max(outputs, 1)[1]
                if phase == 'src':
                    loss.backward()
                    optimizer.step()
                total_loss += loss.item() * inputs.size(0)
                correct += torch.sum(preds == labels.data)
            epoch_loss = total_loss / len(dataloaders[phase].dataset)
            epoch_acc = correct.double() / len(dataloaders[phase].dataset)
            acc_hist.append([epoch_loss, epoch_acc])
            print('Epoch: [{:02d}/{:02d}]---{}, loss: {:.6f}, acc: {:.4f}'.format(
                epoch, N_EPOCH, phase, epoch_loss, epoch_acc))
            if phase == 'tar' and epoch_acc > best_acc:
                best_acc = epoch_acc
        print()
    fname = 'finetune_result' + model_name + \
        str(LEARNING_RATE) + str(args.source) + \
        '-' + str(args.target) + '.csv'
    np.savetxt(fname, np.asarray(a=acc_hist, dtype=float), delimiter=',', fmt='%.4f')
    time_pass = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_pass // 60, time_pass % 60))
    return model, best_acc, acc_hist


if __name__ == '__main__':
    torch.manual_seed(10)
    # Load data
    root_dir = 'data/OFFICE31/'
    domain = {'src': str(args.source), 'tar': str(args.target)}
    dataloaders = {}
    dataloaders['tar'] = data_loader.load_data(root_dir, domain['tar'], BATCH_SIZE['tar'], 'tar')
    dataloaders['src'], dataloaders['val'] = data_loader.load_train(root_dir, domain['src'], BATCH_SIZE['src'], 'src')
    print(len(dataloaders['src'].dataset), len(dataloaders['val'].dataset))
    # Load model
    model_name = str(args.model)
    model = load_model(model_name).to(DEVICE)
    print('Source:{}, target:{}, model: {}'.format(domain['src'], domain['tar'], model_name))
    optimizer = get_optimizer(model_name)
    model_best, best_acc, acc_hist = finetune(model, dataloaders, optimizer)
    print('{}Best acc: {}'.format('*' * 10, best_acc))
```
![image](https://img-ask.csdn.net/upload/202003/25/1585136908_284909.png)
The error output is as follows:
![image](https://img-ask.csdn.net/upload/202003/25/1585136983_480180.png)
![image](https://img-ask.csdn.net/upload/202003/25/1585136994_924063.png)
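Since `data_loader.py` is not shown, this is only a guess: on Win10, `BrokenPipeError: [Errno 32]` from a PyTorch training script almost always traces back to DataLoader worker subprocesses, so it is worth confirming that every `DataLoader` constructed inside `data_loader.load_data`/`load_train` (not just the one already checked) passes `num_workers=0`. A runnable sketch of the Windows-safe construction, with a stand-in dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; in the real script this is whatever data_loader.py builds.
dataset = TensorDataset(torch.randn(8, 3, 224, 224),
                        torch.zeros(8, dtype=torch.long))
# num_workers=0 loads batches in the main process, avoiding the
# worker-process pipes that break on Windows.
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=0)

for inputs, labels in loader:
    print(inputs.shape, labels.shape)
```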
Deep-learning image recognition: why does the training loop stop?
Recently, while running InceptionV3 training, I occasionally hit a problem: at some point the code simply stops iterating, and I don't know why.

![image](https://img-ask.csdn.net/upload/201811/11/1541904283_461011.png)

The screenshot above is an example: say it stops at step 291 and never computes further, yet both the CPU and the GPU stay fully loaded. The code is below:

```python
# coding=utf-8
import tensorflow as tf
import numpy as np
import pdb
import os
from datetime import datetime
import slim.inception_model as inception_v3
from create_tf_record import *
import tensorflow.contrib.slim as slim

labels_nums = 7        # number of classes
batch_size = 64
resize_height = 299    # stored image height
resize_width = 299     # stored image width
depths = 3
data_shape = [batch_size, resize_height, resize_width, depths]

# input_images holds the image data
input_images = tf.placeholder(dtype=tf.float32, shape=[None, resize_height, resize_width, depths], name='input')

# input_labels holds the label data
# input_labels = tf.placeholder(dtype=tf.int32, shape=[None], name='label')
input_labels = tf.placeholder(dtype=tf.int32, shape=[None, labels_nums], name='label')

# dropout keep probability
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
is_training = tf.placeholder(tf.bool, name='is_training')

#config = tf.ConfigProto()
#config.gpu_options.allow_growth = True
#tf.Session(config=config)
#tf.Session(config=tf.ConfigProto(allow_growth=True))


def net_evaluation(sess, loss, accuracy, val_images_batch, val_labels_batch, val_nums):
    val_max_steps = int(val_nums / batch_size)
    val_losses = []
    val_accs = []
    for _ in range(val_max_steps):
        val_x, val_y = sess.run([val_images_batch, val_labels_batch])
        # print('labels:', val_y)
        # val_loss = sess.run(loss, feed_dict={x: val_x, y: val_y, keep_prob: 1.0})
        # val_acc = sess.run(accuracy, feed_dict={x: val_x, y: val_y, keep_prob: 1.0})
        val_loss, val_acc = sess.run([loss, accuracy],
                                     feed_dict={input_images: val_x, input_labels: val_y,
                                                keep_prob: 1.0, is_training: False})
        val_losses.append(val_loss)
        val_accs.append(val_acc)
    mean_loss = np.array(val_losses, dtype=np.float32).mean()
    mean_acc = np.array(val_accs, dtype=np.float32).mean()
    return mean_loss, mean_acc


def step_train(train_op, loss, accuracy,
               train_images_batch, train_labels_batch, train_nums, train_log_step,
               val_images_batch, val_labels_batch, val_nums, val_log_step,
               snapshot_prefix, snapshot):
    '''
    Iterative training loop
    :param train_op: train op
    :param loss: loss op
    :param accuracy: accuracy op
    :param train_images_batch: training image batch
    :param train_labels_batch: training label batch
    :param train_nums: total number of training samples
    :param train_log_step: interval for printing training logs
    :param val_images_batch: validation image batch
    :param val_labels_batch: validation label batch
    :param val_nums: total number of validation samples
    :param val_log_step: interval for printing validation logs
    :param snapshot_prefix: path prefix for saved models
    :param snapshot: interval for saving the model
    :return: None
    '''
    # initialization
    #init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    max_acc = 0.0
    #ckpt = tf.train.get_checkpoint_state('D:/can_test/inception v3/')
    #saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
    #tf.reset_default_graph()
    with tf.Session() as sess:
        #sess.run(tf.global_variables_initializer())  # for resuming training
        #saver = tf.train.import_meta_graph('D://can_test/inception v3/best_models_2_0.7500.ckpt.meta')  # resume training
        #saver.restore(sess, tf.train.latest_checkpoint('D://can_test/inception v3/'))  # resume training
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        for i in range(max_steps + 1):
            batch_input_images, batch_input_labels = sess.run([train_images_batch, train_labels_batch])
            _, train_loss = sess.run([train_op, loss],
                                     feed_dict={input_images: batch_input_images,
                                                input_labels: batch_input_labels,
                                                keep_prob: 0.5, is_training: True})
            # train evaluation (only one training batch is tested here)
            if i % train_log_step == 0:
                train_acc = sess.run(accuracy,
                                     feed_dict={input_images: batch_input_images,
                                                input_labels: batch_input_labels,
                                                keep_prob: 1.0, is_training: False})
                print("%s: Step [%d] train Loss : %f, training accuracy : %g" % (
                    datetime.now(), i, train_loss, train_acc))
            # val evaluation (all val data)
            if i % val_log_step == 0:
                mean_loss, mean_acc = net_evaluation(sess, loss, accuracy,
                                                     val_images_batch, val_labels_batch, val_nums)
                print("%s: Step [%d] val Loss : %f, val accuracy : %g" % (
                    datetime.now(), i, mean_loss, mean_acc))
            # save the model every `snapshot` iterations and at the last step
            if i == max_steps:
                print('-----save:{}-{}'.format(snapshot_prefix, i))
                saver.save(sess, snapshot_prefix, global_step=i)
            # keep the model with the best val accuracy
            if mean_acc > max_acc and mean_acc > 0.90:
                max_acc = mean_acc
                path = os.path.dirname(snapshot_prefix)
                best_models = os.path.join(path, 'best_models_{}_{:.4f}.ckpt'.format(i, max_acc))
                print('------save:{}'.format(best_models))
                saver.save(sess, best_models)
        coord.request_stop()
        coord.join(threads)


def train(train_record_file, train_log_step, train_param,
          val_record_file, val_log_step,
          labels_nums, data_shape, snapshot, snapshot_prefix):
    '''
    :param train_record_file: training tfrecord file
    :param train_log_step: interval for printing training logs
    :param train_param: training parameters
    :param val_record_file: validation tfrecord file
    :param val_log_step: interval for printing validation logs
    :param labels_nums: number of labels
    :param data_shape: input data shape
    :param snapshot: interval for saving the model
    :param snapshot_prefix: filename prefix for saved models
    :return:
    '''
    [base_lr, max_steps] = train_param
    [batch_size, resize_height, resize_width, depths] = data_shape

    # numbers of training and validation samples
    train_nums = get_example_nums(train_record_file)
    val_nums = get_example_nums(val_record_file)
    print('train nums:%d,val nums:%d' % (train_nums, val_nums))

    # read images and labels from the tfrecords;
    # training data should generally be shuffled (shuffle=True)
    train_images, train_labels = read_records(train_record_file, resize_height, resize_width, type='normalization')
    train_images_batch, train_labels_batch = get_batch_images(train_images, train_labels,
                                                              batch_size=batch_size, labels_nums=labels_nums,
                                                              one_hot=True, shuffle=True)
    # validation data does not need shuffling
    val_images, val_labels = read_records(val_record_file, resize_height, resize_width, type='normalization')
    val_images_batch, val_labels_batch = get_batch_images(val_images, val_labels,
                                                          batch_size=batch_size, labels_nums=labels_nums,
                                                          one_hot=True, shuffle=False)

    # Define the model:
    with slim.arg_scope(inception_v3.inception_v3_arg_scope()):
        out, end_points = inception_v3.inception_v3(inputs=input_images,
                                                    num_classes=labels_nums,
                                                    dropout_keep_prob=keep_prob,
                                                    is_training=is_training)

    # Specify the loss function: losses defined via tf.losses are added to the
    # total loss automatically, so add_loss() is not needed
    tf.losses.softmax_cross_entropy(onehot_labels=input_labels, logits=out)  # cross-entropy loss (~1.6)
    # slim.losses.add_loss(my_loss)
    loss = tf.losses.get_total_loss(add_regularization_losses=True)  # add regularization losses (~2.2)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(out, 1), tf.argmax(input_labels, 1)), tf.float32))

    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=base_lr)
    # global_step = tf.Variable(0, trainable=False)
    # learning_rate = tf.train.exponential_decay(0.05, global_step, 150, 0.9)
    # optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9)
    # train_tensor = optimizer.minimize(loss, global_step)
    # train_op = slim.learning.create_train_op(loss, optimizer, global_step=global_step)

    # When `batch_norm` layers are used, the per-layer moving average and
    # variance must be updated; those updates are not part of the normal
    # training step, so they are wired in manually below.
    # Collect all update ops via `tf.get_collection`
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    # Use TF control flow: run the update ops first, then the train op
    with tf.control_dependencies(update_ops):
        # create_train_op ensures that when we evaluate it to get the loss,
        # the update_ops are done and the gradient updates are computed.
        # train_op = slim.learning.create_train_op(total_loss=loss, optimizer=optimizer)
        train_op = slim.learning.create_train_op(total_loss=loss, optimizer=optimizer)

    # training loop
    step_train(train_op, loss, accuracy,
               train_images_batch, train_labels_batch, train_nums, train_log_step,
               val_images_batch, val_labels_batch, val_nums, val_log_step,
               snapshot_prefix, snapshot)


if __name__ == '__main__':
    train_record_file = '/home/lab/new_jeremie/train.tfrecords'
    val_record_file = '/home/lab/new_jeremie/val.tfrecords'
    #train_record_file = 'D://cancer_v2/data/cancer/train.tfrecords'
    #val_record_file = 'D://val.tfrecords'

    train_log_step = 1
    base_lr = 0.01         # learning rate
    max_steps = 100000     # number of iterations
    train_param = [base_lr, max_steps]

    val_log_step = 1
    snapshot = 2000        # checkpoint interval
    snapshot_prefix = './v3model.ckpt'
    train(train_record_file=train_record_file,
          train_log_step=train_log_step,
          train_param=train_param,
          val_record_file=val_record_file,
          val_log_step=val_log_step,
          labels_nums=labels_nums,
          data_shape=data_shape,
          snapshot=snapshot,
          snapshot_prefix=snapshot_prefix)
```
Modified SSD-Tensorflow hits mismatched loss input dimensions during training
I'm currently studying object detection and recognition. Drawing on some papers, I made changes to the original SSD. The network-definition part of the changes is finished and raises no errors, but running training fails with:

```
ValueError: Dimension 0 in both shapes must be equal, but are 233920 and 251392. Shapes are [233920] and [251392]. for 'ssd_losses/Select' (op: 'Select') with input shapes: [251392], [233920], [251392].
```

![image](https://img-ask.csdn.net/upload/201904/06/1554539638_631515.png)
![image](https://img-ask.csdn.net/upload/201904/06/1554539651_430990.png)

```python
# =========================================================================== #
# SSD loss function.
# =========================================================================== #
def ssd_losses(logits, localisations,
               gclasses, glocalisations, gscores,
               match_threshold=0.5,
               negative_ratio=3.,
               alpha=1.,
               label_smoothing=0.,
               device='/cpu:0',
               scope=None):
    with tf.name_scope(scope, 'ssd_losses'):
        lshape = tfe.get_shape(logits[0], 5)
        num_classes = lshape[-1]
        batch_size = lshape[0]

        # Flatten out all vectors!
        flogits = []
        fgclasses = []
        fgscores = []
        flocalisations = []
        fglocalisations = []
        for i in range(len(logits)):
            flogits.append(tf.reshape(logits[i], [-1, num_classes]))
            fgclasses.append(tf.reshape(gclasses[i], [-1]))
            fgscores.append(tf.reshape(gscores[i], [-1]))
            flocalisations.append(tf.reshape(localisations[i], [-1, 4]))
            fglocalisations.append(tf.reshape(glocalisations[i], [-1, 4]))
        # And concat the crap!
        logits = tf.concat(flogits, axis=0)
        gclasses = tf.concat(fgclasses, axis=0)
        gscores = tf.concat(fgscores, axis=0)
        localisations = tf.concat(flocalisations, axis=0)
        glocalisations = tf.concat(fglocalisations, axis=0)
        dtype = logits.dtype

        # Compute positive matching mask...
        pmask = gscores > match_threshold
        fpmask = tf.cast(pmask, dtype)
        n_positives = tf.reduce_sum(fpmask)

        # Hard negative mining...
        no_classes = tf.cast(pmask, tf.int32)
        predictions = slim.softmax(logits)
        nmask = tf.logical_and(tf.logical_not(pmask), gscores > -0.5)
        fnmask = tf.cast(nmask, dtype)
        nvalues = tf.where(nmask, predictions[:, 0], 1. - fnmask)
        nvalues_flat = tf.reshape(nvalues, [-1])
        # Number of negative entries to select.
        max_neg_entries = tf.cast(tf.reduce_sum(fnmask), tf.int32)
        n_neg = tf.cast(negative_ratio * n_positives, tf.int32) + batch_size
        n_neg = tf.minimum(n_neg, max_neg_entries)

        val, idxes = tf.nn.top_k(-nvalues_flat, k=n_neg)
        max_hard_pred = -val[-1]
        # Final negative mask.
        nmask = tf.logical_and(nmask, nvalues < max_hard_pred)
        fnmask = tf.cast(nmask, dtype)

        # Add cross-entropy loss.
        with tf.name_scope('cross_entropy_pos'):
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=gclasses)
            loss = tf.div(tf.reduce_sum(loss * fpmask), batch_size, name='value')
            tf.losses.add_loss(loss)

        with tf.name_scope('cross_entropy_neg'):
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=no_classes)
            loss = tf.div(tf.reduce_sum(loss * fnmask), batch_size, name='value')
            tf.losses.add_loss(loss)

        # Add localization loss: smooth L1, L2, ...
        with tf.name_scope('localization'):
            # Weights Tensor: positive mask + random negative.
            weights = tf.expand_dims(alpha * fpmask, axis=-1)
            loss = custom_layers.abs_smooth(localisations - glocalisations)
            loss = tf.div(tf.reduce_sum(loss * weights), batch_size, name='value')
            tf.losses.add_loss(loss)
```

I've studied the source for a while. Only the network-definition part of ssd_vgg_300.py in SSD-Tensorflow-master was modified; the loss-function code above was not touched. I can't find where the error comes from, and I haven't found a solution online. Hoping someone can help, much appreciated!
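A debugging direction, not a confirmed fix: a mismatch like 233920 vs 251392 inside `ssd_losses/Select` usually means the encoded ground truth (`gclasses`/`gscores`, produced from the anchor configuration) no longer lines up with the modified network's prediction heads, so after changing the feature maps in `ssd_vgg_300.py`, the `feat_shapes`/anchor settings must be updated to match. Printing the per-layer element counts that feed `ssd_losses` shows which layer diverges:

```python
# Compare prediction-side vs ground-truth-side shapes layer by layer
# (run inside ssd_losses, before the flatten/concat step).
for i, (lg, gs) in enumerate(zip(logits, gscores)):
    print('layer', i,
          'logits shape:', lg.get_shape().as_list(),
          'gscores shape:', gs.get_shape().as_list())
```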
TensorFlow: loading a trained model for prediction gives different results for the same image?
I've recently been running DeepLab v1. While testing, the training program runs through fine, but when I use the trained model for prediction, the same image gives different results each run. Does anyone know what's going on? I load the model with saver.restore(). The code is as follows:

```python
def main():
    """Create the model and start the inference process."""
    args = get_arguments()

    # Prepare image.
    img = tf.image.decode_jpeg(tf.read_file(args.img_path), channels=3)
    # Convert RGB to BGR.
    img_r, img_g, img_b = tf.split(value=img, num_or_size_splits=3, axis=2)
    img = tf.cast(tf.concat(axis=2, values=[img_b, img_g, img_r]), dtype=tf.float32)
    # Extract mean.
    img -= IMG_MEAN

    # Create network.
    net = DeepLabLFOVModel()

    # Which variables to load.
    trainable = tf.trainable_variables()

    # Predictions.
    pred = net.preds(tf.expand_dims(img, dim=0))

    # Set up TF session and initialize variables.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)
    #init = tf.global_variables_initializer()
    sess.run(tf.global_variables_initializer())

    # Load weights.
    saver = tf.train.Saver(var_list=trainable)
    load(saver, sess, args.model_weights)

    # Perform inference.
    preds = sess.run([pred])
    print(preds)

    if not os.path.exists(args.save_dir):
        os.makedirs(args.save_dir)
    msk = decode_labels(np.array(preds)[0, 0, :, :, 0])
    im = Image.fromarray(msk)
    im.save(args.save_dir + 'mask1.png')

    print('The output file has been saved to {}'.format(args.save_dir + 'mask.png'))


if __name__ == '__main__':
    main()
```

where load is

```python
def load(saver, sess, ckpt_path):
    '''Load trained weights.

    Args:
      saver: TensorFlow saver object.
      sess: TensorFlow session.
      ckpt_path: path to checkpoint file with parameters.
    '''
    ckpt = tf.train.get_checkpoint_state(ckpt_path)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)
        print("Restored model parameters from {}".format(ckpt_path))
```

The DeepLabLFOVModel class is as follows:

```python
class DeepLabLFOVModel(object):
    """DeepLab-LargeFOV model with atrous convolution and bilinear upsampling.

    This class implements a multi-layer convolutional neural network for semantic
    image segmentation task. This is the same as the model described in this paper:
    https://arxiv.org/abs/1412.7062 - please look there for details.
    """

    def __init__(self, weights_path=None):
        """Create the model.

        Args:
          weights_path: the path to the cpkt file with dictionary of weights from .caffemodel.
        """
        self.variables = self._create_variables(weights_path)

    def _create_variables(self, weights_path):
        """Create all variables used by the network.

        This allows to share them between multiple calls
        to the loss function.

        Args:
          weights_path: the path to the ckpt file with dictionary of weights from .caffemodel.
                        If none, initialise all variables randomly.

        Returns:
          A dictionary with all variables.
        """
        var = list()
        index = 0
        if weights_path is not None:
            with open(weights_path, "rb") as f:
                weights = cPickle.load(f)  # Load pre-trained weights.
                for name, shape in net_skeleton:
                    var.append(tf.Variable(weights[name], name=name))
                del weights
        else:
            # Initialise all weights randomly with the Xavier scheme,
            # and all biases to 0's.
            for name, shape in net_skeleton:
                if "/w" in name:  # Weight filter.
                    w = create_variable(name, list(shape))
                    var.append(w)
                else:
                    b = create_bias_variable(name, list(shape))
                    var.append(b)
        return var

    def _create_network(self, input_batch, keep_prob):
        """Construct DeepLab-LargeFOV network.

        Args:
          input_batch: batch of pre-processed images.
          keep_prob: probability of keeping neurons intact.

        Returns:
          A downsampled segmentation mask.
        """
        current = input_batch
        v_idx = 0  # Index variable.

        # Last block is the classification layer.
        for b_idx in xrange(len(dilations) - 1):
            for l_idx, dilation in enumerate(dilations[b_idx]):
                w = self.variables[v_idx * 2]
                b = self.variables[v_idx * 2 + 1]
                if dilation == 1:
                    conv = tf.nn.conv2d(current, w, strides=[1, 1, 1, 1], padding='SAME')
                else:
                    conv = tf.nn.atrous_conv2d(current, w, dilation, padding='SAME')
                current = tf.nn.relu(tf.nn.bias_add(conv, b))
                v_idx += 1
            # Optional pooling and dropout after each block.
            if b_idx < 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 2, 2, 1],
                                         padding='SAME')
            elif b_idx == 3:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx == 4:
                current = tf.nn.max_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
                current = tf.nn.avg_pool(current,
                                         ksize=[1, ks, ks, 1],
                                         strides=[1, 1, 1, 1],
                                         padding='SAME')
            elif b_idx <= 6:
                current = tf.nn.dropout(current, keep_prob=keep_prob)

        # Classification layer; no ReLU.
        # w = self.variables[v_idx * 2]
        w = create_variable(name='w', shape=[1, 1, 1024, n_classes])
        # b = self.variables[v_idx * 2 + 1]
        b = create_bias_variable(name='b', shape=[n_classes])
        conv = tf.nn.conv2d(current, w, strides=[1, 1, 1, 1], padding='SAME')
        current = tf.nn.bias_add(conv, b)

        return current

    def prepare_label(self, input_batch, new_size):
        """Resize masks and perform one-hot encoding.

        Args:
          input_batch: input tensor of shape [batch_size H W 1].
          new_size: a tensor with new height and width.

        Returns:
          Outputs a tensor of shape [batch_size h w 18] with last dimension
          comprised of 0's and 1's only.
        """
        with tf.name_scope('label_encode'):
            # As labels are integer numbers, need to use NN interp.
            input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size)
            # Reducing the channel dimension.
            input_batch = tf.squeeze(input_batch, squeeze_dims=[3])
            input_batch = tf.one_hot(input_batch, depth=n_classes)
        return input_batch

    def preds(self, input_batch):
        """Create the network and run inference on the input batch.

        Args:
          input_batch: batch of pre-processed images.

        Returns:
          Argmax over the predictions of the network of the same shape as the input.
        """
        raw_output = self._create_network(tf.cast(input_batch, tf.float32), keep_prob=tf.constant(1.0))
        raw_output = tf.image.resize_bilinear(raw_output, tf.shape(input_batch)[1:3, ])
        raw_output = tf.argmax(raw_output, dimension=3)
        raw_output = tf.expand_dims(raw_output, dim=3)  # Create 4D-tensor.
        return tf.cast(raw_output, tf.uint8)

    def loss(self, img_batch, label_batch):
        """Create the network, run inference on the input batch and compute loss.

        Args:
          input_batch: batch of pre-processed images.

        Returns:
          Pixel-wise softmax loss.
        """
        raw_output = self._create_network(tf.cast(img_batch, tf.float32), keep_prob=tf.constant(0.5))
        prediction = tf.reshape(raw_output, [-1, n_classes])

        # Need to resize labels and convert using one-hot encoding.
        label_batch = self.prepare_label(label_batch, tf.stack(raw_output.get_shape()[1:3]))
        gt = tf.reshape(label_batch, [-1, n_classes])

        # Pixel-wise softmax loss.
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=gt)
        reduced_loss = tf.reduce_mean(loss)

        return reduced_loss
```

In principle loading the model should work, so why do the results differ?

Input image:
![image](https://img-ask.csdn.net/upload/201911/15/1573810836_83106.jpg)
![image](https://img-ask.csdn.net/upload/201911/15/1573810850_924663.png)
Predicted results:
![image](https://img-ask.csdn.net/upload/201911/15/1573810884_985680.png)
![image](https://img-ask.csdn.net/upload/201911/15/1573810904_577649.png)
The two runs disagree with each other, and both differ from the results computed with the saved model at training time. I'm using this person's code from GitHub: https://github.com/minar09/DeepLab-LFOV-TensorFlow
This is urgent. Does anyone know what's wrong?
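One hypothesis that fits the code above (offered as a guess, not a verified fix): in `_create_network`, the final classification layer is built with fresh `create_variable`/`create_bias_variable` calls (random Xavier init) instead of being taken from `self.variables`, and `main()` captures `trainable = tf.trainable_variables()` before `net.preds(...)` has built that layer. The Saver therefore never restores the classifier weights, and they are re-randomized on every run, which would produce exactly this run-to-run variation. Collecting the variable list after the graph is built would rule this out:

```python
# Reordered sketch of main(): build the full graph first, then collect
# variables, so the classification layer's w/b are included in what the
# Saver restores (assuming they exist in the checkpoint under those names).
net = DeepLabLFOVModel()
pred = net.preds(tf.expand_dims(img, dim=0))  # creates all variables, incl. the final w/b
trainable = tf.trainable_variables()          # capture AFTER the graph exists
saver = tf.train.Saver(var_list=trainable)
```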
ValueError when training a network with Keras
As the title says: when training with Keras's model.fit, I get ValueError: None values not supported. The error is:
```
  File "C:/Users/Desktop/MNISTpractice/mnist.py", line 93, in <module>
    model.fit(x_train,y_train, epochs=2, callbacks=callback_list,validation_data=(x_val,y_val))
  File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 1575, in fit
    self._make_train_function()
  File "C:\Anaconda3\lib\site-packages\keras\engine\training.py", line 960, in _make_train_function
    loss=self.total_loss)
  File "C:\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\Anaconda3\lib\site-packages\keras\optimizers.py", line 432, in get_updates
    m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 820, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 639, in convert_to_tensor
    as_ref=False)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 704, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 113, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 102, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.
```
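A pattern worth checking, going only by the traceback: the failure happens while Adam builds its update ops (`m_t = (self.beta_1 * m) + (1. - self.beta_1) * g`), and `None values not supported` there typically means some trainable weight received a `None` gradient, i.e. it has no path to the loss. This snippet, using the old-style `keras` API shown in the traceback, lists any disconnected weights after `compile`:

```python
# Run after model.compile(...): any hit here would explain the
# "None values not supported" raised inside optimizer.get_updates.
import keras.backend as K

grads = K.gradients(model.total_loss, model.trainable_weights)
for weight, grad in zip(model.trainable_weights, grads):
    if grad is None:
        print('No gradient reaches', weight.name)
```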
AlexNet classification problem: input shape mismatch
I'm using an AlexNet network for a binary classification problem; the inputs are 227x227 color images. I ran into the following problem (it says the shapes don't match):
![shape mismatch screenshot](https://img-ask.csdn.net/upload/201704/28/1493379079_9640.jpg)
I don't know how to solve it. Any help appreciated.
```python
from __future__ import division, print_function, absolute_import

import os
import random
from PIL import Image
import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d, upsample_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
#import tflearn.datasets.oxflower17 as oxflower17
#X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))

np.random.seed(170)

def load_data(DataDir):
    data = np.empty((170, 227, 227, 3), dtype="float32")   # 170 images, 227x227 size, 3 channels
    label = np.empty((170,), dtype="int")
    imgs = os.listdir(DataDir)
    num = len(imgs)
    for i in range(num):
        img = Image.open(DataDir + imgs[i])
        arr = np.asarray(img, dtype="float32")
        data[i, :, :, :] = arr
        if i < 53:
            label[i] = int(0)   # class 0 is defect-free; of the 170 images, numbers 0-52 are defect-free
        else:
            label[i] = int(1)
    data /= np.max(data)    # these two lines normalize the data
    data -= np.mean(data)
    return data, label

data, label = load_data('C:/Users/Administrator/Desktop/cnntest/picture/')

index = [i for i in range(len(data))]
random.shuffle(index)   # the data was ordered by class when labeled, so shuffle; labels still correspond
data = data[index]
label = label[index]
(TrainData, TestData) = (data[0:119], data[120:])   # both classes go in together; 7:3 train/test split
(TrainLabel, TestLabel) = (label[0:119], label[120:])

# Building 'AlexNet'
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')   # 96 filters of size 11
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu', group=2)
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 256, 3, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = upsample_2d(network, 2, name='upsample')
network = fully_connected(network, 4096, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 4096, activation='relu')
network = dropout(network, 0.5)
#net = tflearn.global_avg_pool(net)
network = fully_connected(network, 2, activation='softmax')
network = regression(network, optimizer='momentum',
                     loss='categorical_crossentropy',
                     learning_rate=0.001)

# Training
print('Training ------------')
model = tflearn.DNN(network, checkpoint_path='model_alexnet',
                    max_checkpoints=1, tensorboard_verbose=2)
model.fit(TrainData, TrainLabel, n_epoch=5, validation_set=0.1, shuffle=True,
          show_metric=True, batch_size=64, snapshot_step=200,
          snapshot_epoch=False, run_id='CNNPOTATO')
model.save('CNNPOTATO.model')
model.load('CNNPOTATO.model')

print('\nTesting ------------')
# Evaluate the model with the metrics we defined earlier
loss, accuracy = model.evaluate(TestData, TestLabel)
print('\ntest loss: ', loss)
print('\ntest accuracy: ', accuracy)
#print(model.predict([Y[1]]))
```
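A guess at the mismatch, based on the code above: the network ends in a 2-unit softmax trained with `categorical_crossentropy`, which expects one-hot labels of shape `(None, 2)`, but `load_data` returns plain integer labels of shape `(None,)`. One-hot encoding the labels before `fit`/`evaluate` makes the shapes agree:

```python
# One-hot encode the integer labels so they match the (None, 2) softmax
# output expected by categorical_crossentropy.
from tflearn.data_utils import to_categorical

TrainLabel = to_categorical(TrainLabel, 2)
TestLabel = to_categorical(TestLabel, 2)
```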
First time using matplotlib.pyplot: I typed in the code from a video tutorial, but the fitted curve never appears. Can anyone take a look?
Why don't I get the red line?
```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

def add_layer(inputs, in_size, out_size, activation_function=None):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(l1, 10, 1, activation_function=None)

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_data, y_data)
plt.ion()
plt.show(block=False)  # block=False

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        # print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
        try:
            ax.lines.remove(lines[0])
        except Exception:
            pass
        prediction_value = sess.run(prediction, feed_dict={xs: x_data})
        lines = ax.plot(x_data, prediction_value, 'y-', lw=10)
        plt.pause(0.1)
```
![image](https://img-ask.csdn.net/upload/201905/18/1558162247_285456.jpg)
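Two small things stand out, offered as guesses: `'y-'` draws a yellow line (the tutorial's red line is `'r-'`), and in IDEs such as Spyder a non-blocking window can close or never refresh, so a final blocking `plt.show()` confirms whether the fit was drawn at all:

```python
# Inside the loop: red fitted curve, thinner line as in the tutorial.
lines = ax.plot(x_data, prediction_value, 'r-', lw=5)
plt.pause(0.1)

# After the loop: leave interactive mode and keep the window open
# (useful when the IDE closes non-blocking figures immediately).
plt.ioff()
plt.show()
```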
The same model run on a Mac and on a Windows server differs in accuracy by more than 60%!
This is a CNN text-classification model. On my Mac it reaches about 90% accuracy, but on a Windows server it only gets 25%. What could be the reason?
```python
from __future__ import print_function
import numpy as np
from keras.utils import np_utils
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import pandas as pd
import os
from keras import backend as K

print('Loading Dict')
embeddings_index = {}
f = open(os.path.join('glove.6B.100d.txt'))
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()

print('Loading dataset')
tmp = pd.read_csv('train.csv')
train_X = np.array(tmp.iloc[:, 2]).astype('str')
train_y = np.array(tmp.iloc[:, 0]).astype('int16')
train_y_ohe = np_utils.to_categorical(train_y)
del tmp

tmp = pd.read_csv('test.csv')
test_X = np.array(tmp.iloc[:, 2]).astype('str')
test_y = np.array(tmp.iloc[:, 0]).astype('int16')
test_y_ohe = np_utils.to_categorical(test_y)
del tmp

train_y_ohe = train_y_ohe.astype('float32')
test_y_ohe = test_y_ohe.astype('float32')
X = np.append(train_X, test_X)

print('Tokening')
t = Tokenizer()
t.fit_on_texts(X)
vocab_size = len(t.word_index) + 1
# integer encode the documents
encoded_X = t.texts_to_sequences(X)
# pad documents to a max length of x words
max_length = 50
padded_X = pad_sequences(encoded_X, maxlen=max_length, padding='post')

embedding_matrix = np.zeros((vocab_size, 100)).astype('float32')
for word, i in t.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

padded_X_train = pad_sequences(encoded_X[0:119999], maxlen=max_length, padding='post')
padded_X_test = pad_sequences(encoded_X[119999:127598], maxlen=max_length, padding='post')
padded_X_test = padded_X_test.astype('float32')
padded_X_train = padded_X_train.astype('float32')

print('Establish model')
from keras.models import Model
from keras.layers import Dense, Embedding, Convolution1D, concatenate, Flatten, Input, MaxPooling1D, Dropout, Merge
from keras.callbacks import TensorBoard

K.clear_session()
x = Input(shape=(50,), dtype='float32')
embed = Embedding(input_dim=vocab_size, output_dim=100, weights=[embedding_matrix], input_length=max_length)(x)
cnn1 = Convolution1D(128, 9, activation='relu', padding='same', strides=1)(embed)
cnn1 = MaxPooling1D(5)(cnn1)
cnn2 = Convolution1D(128, 6, activation='relu', padding='same', strides=1)(embed)
cnn2 = MaxPooling1D(5)(cnn2)
cnn3 = Convolution1D(128, 3, activation='relu', padding='same', strides=1)(embed)
cnn3 = MaxPooling1D(5)(cnn3)
cnn = concatenate([cnn1, cnn2, cnn3])
flat = Flatten()(cnn)
drop = Dropout(0.1)(flat)
y = Dense(5, activation='softmax')(drop)
model = Model(inputs=x, outputs=y)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

tensorboard = TensorBoard(log_dir='./logs', write_graph=True, write_grads=True, histogram_freq=True)
model.fit(padded_X_train, train_y_ohe, epochs=5, batch_size=10000, verbose=1,
          callbacks=[tensorboard], validation_data=[padded_X_test, test_y_ohe])

'''pred0 = model.predict_classes(padded_X, verbose=0)
acc_train = np.sum(train_y == pred0, axis=0) / train_X.shape[0]'''
```
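One platform difference worth ruling out first (a hypothesis, not a confirmed diagnosis): `open('glove.6B.100d.txt')` uses the platform's default text encoding, which is UTF-8 on macOS but a locale codepage such as GBK on Chinese-language Windows, and a mis-decoded GloVe file silently yields a corrupted `embedding_matrix`, which alone could explain a 90% vs 25% gap. Pinning the encoding makes both machines read the vectors identically:

```python
# Read GloVe with an explicit encoding so Windows and macOS parse it the same.
f = open(os.path.join('glove.6B.100d.txt'), encoding='utf-8')
```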
Training Mask R-CNN with TensorFlow: the training command always errors out and I'm stuck. Please help
When training Mask R-CNN with TensorFlow, the run always errors out at the training command and I can't get past it. The command is:
```
python model_main.py --model_dir=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training --pipeline_config_path=C:/Users/zoyiJiang/Desktop/mask_rcnn_test-master/training/mask_rcnn_inception_v2_coco.config
```
The error output is:
```
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x000001C1EA335C80>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "model_main.py", line 105, in main
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate
    executor.run()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run
    self.run_local()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local
    hooks=train_hooks)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn
    result = self._call_input_fn(input_fn, mode)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn
    return input_fn(**kwargs)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 525, in _train_input_fn
    batch_size=params['batch_size'] if params else train_config.batch_size)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 149, in build
    dataset = data_map_fn(process_fn, num_parallel_calls=num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 853, in map
    return ParallelMapDataset(self, map_func, num_parallel_calls)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1870, in __init__
    super(ParallelMapDataset, self).__init__(input_dataset, map_func)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1839, in __init__
    self._map_func.add_to_graph(ops.get_default_graph())
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 484, in add_to_graph
    self._create_definition_if_needed()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 319, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "E:\Python3.6\lib\site-packages\tensorflow\python\framework\function.py", line 336, in _create_definition_if_needed_impl
    outputs = self._func(*inputs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 1804, in tf_map_func
    ret = map_func(nested_args)
  File "D:\Tensorflow\tf\models\research\object_detection\builders\dataset_builder.py", line 130, in process_fn
    processed_tensors = transform_input_data_fn(processed_tensors)
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 515, in transform_and_pad_input_data_fn
    tensor_dict=transform_data_fn(tensor_dict),
  File "D:\Tensorflow\tf\models\research\object_detection\inputs.py", line 129, in transform_input_data
    tf.expand_dims(tf.to_float(image), axis=0))
  File "D:\Tensorflow\tf\models\research\object_detection\meta_architectures\faster_rcnn_meta_arch.py", line 543, in preprocess
    parallel_iterations=self._parallel_iterations)
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in static_or_dynamic_map_fn
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\utils\shape_utils.py", line 237, in <listcomp>
    outputs = [fn(arg) for arg in tf.unstack(elems)]
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2264, in resize_to_range
    lambda: _resize_portrait_image(image))
  File "E:\Python3.6\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2063, in cond
    orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
  File "E:\Python3.6\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1913, in BuildCondBranch
    original_result = fn()
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2263, in <lambda>
    lambda: _resize_landscape_image(image),
  File "D:\Tensorflow\tf\models\research\object_detection\core\preprocessor.py", line 2245, in _resize_landscape_image
    align_corners=align_corners, preserve_aspect_ratio=True)
TypeError: resize_images() got an unexpected keyword argument 'preserve_aspect_ratio'
```
Going by the last line of the traceback, it says the keyword argument is not valid. I'm using TensorFlow 1.8 with Python 3.6, and I downloaded the latest TensorFlow models-master.