When training a convolutional neural network in TensorFlow, what type must the input data be?


3 answers

Types that can be fed: Python scalars, strings, lists, numpy ndarrays, or TensorHandles.
Types that can be fetched: Tensors and strings.
When running the graph, fetch a tensor out with sess.run(), then feed it back in.
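
For illustration, a minimal sketch of that feed/fetch round trip (TF 1.x; the placeholder shape and names are made up for the example):

```
import numpy as np
import tensorflow as tf

# A placeholder for a batch of 28x28 grayscale images (shape chosen for illustration).
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
mean_pixel = tf.reduce_mean(x)

with tf.Session() as sess:
    batch = np.random.rand(8, 28, 28, 1).astype(np.float32)  # a numpy ndarray is feedable
    result = sess.run(mean_pixel, feed_dict={x: batch})      # the fetch comes back as a numpy value
    print(result)
```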

A numpy array should work.

numpy or h5 (HDF5) data both work.

Other related recommendations
A question about batch normalization in convolutional neural networks

In a batch normalization (BN) operation, if the batch size is 32 and the feature-map depth is 16, how many total parameters and how many trainable parameters does the BN layer have? How are they calculated?
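
The thread leaves this unanswered, but for a standard BN layer the count follows from the per-channel parameters: each of the 16 channels carries a trainable scale gamma and shift beta plus a non-trainable moving mean and moving variance, giving 2 x 16 = 32 trainable and 4 x 16 = 64 total parameters; the batch size of 32 does not enter the count. A quick check with tf.keras (the 8x8 spatial size is arbitrary):

```
import tensorflow as tf

# A BN layer over 16 channels; Keras counts the parameters for us.
model = tf.keras.Sequential([
    tf.keras.layers.BatchNormalization(input_shape=(8, 8, 16)),
])
model.summary()
# Total params: 64        (gamma, beta, moving_mean, moving_variance: 4 x 16)
# Trainable params: 32    (gamma and beta: 2 x 16)
```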

How can a TensorFlow CNN take matrices as input?

When I first started learning I used the MNIST dataset, where each image is really just a 28x28 two-dimensional matrix, but MNIST is loaded in a rather special way, as in this code:

```
# dataset
data_dir = 'MNIST_data'
mnist = read_data_sets(data_dir)
train_xdata = np.array([np.reshape(x, [28, 28]) for x in mnist.train.images])
test_xdata = np.array([np.reshape(x, [28, 28]) for x in mnist.test.images])
train_labels = mnist.train.labels
test_labels = mnist.test.labels
```

This reads out exactly the format I need. If I want to read my own matrices from a local txt file instead, how should the first two lines be changed? Could someone write it out in detail? Thanks!
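
One way to do this (a sketch, assuming a hypothetical data.txt that stores one flattened 28x28 sample per row, whitespace-separated, and a labels.txt with one integer label per row; adjust the delimiter and shapes to your files):

```
import numpy as np

# Each row of data.txt is one flattened sample; reshape restores the 28x28 matrix form.
raw = np.loadtxt('data.txt', dtype=np.float32)   # shape: (num_samples, 784)
train_xdata = raw.reshape(-1, 28, 28)            # shape: (num_samples, 28, 28)

# One integer label per row in a separate file (also an assumption).
train_labels = np.loadtxt('labels.txt', dtype=np.int32)
```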

In the TensorFlow-based pix2pix code, how can the input and output images have different resolutions?

Question: after making my own paired data (input 256×256, target 200×256), how do I make the input and output resolutions differ? For example, in an image pair the input is 256×256 while the output and target are both 200×256. Which parameters need to be changed?

Paper reference: "Image-to-Image Translation with Conditional Adversarial Networks"
Code reference: https://blog.csdn.net/MOU_IT/article/details/80802407?utm_source=blogxgwz0

```
# coding=utf-8
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
import numpy as np
import glob
import random
import collections
import math
import time

# https://github.com/affinelayer/pix2pix-tensorflow
train_input_dir = "D:/Project/pix2pix-tensorflow-master/facades/train/"       # training set input
train_output_dir = "D:/Project/pix2pix-tensorflow-master/facades/train_out/"  # training set output
test_input_dir = "D:/Project/pix2pix-tensorflow-master/facades/val/"          # test set input
test_output_dir = "D:/Project/pix2pix-tensorflow-master/facades/test_out/"    # test set output
checkpoint = "D:/Project/pix2pix-tensorflow-master/facades/train_out/"        # directory for saved results
seed = None
max_steps = None        # number of training steps (0 to disable)
max_epochs = 200        # number of training epochs
progress_freq = 50      # display progress every progress_freq steps
trace_freq = 0          # trace execution every trace_freq steps
display_freq = 50       # write current training images every display_freq steps
save_freq = 500         # save model every save_freq steps, 0 to disable
separable_conv = False  # use separable convolutions in the generator
aspect_ratio = 1        # aspect ratio of output images (width/height)
batch_size = 1          # number of images in batch
which_direction = "BtoA"  # choices=["AtoB", "BtoA"]
ngf = 64                # number of generator filters in first conv layer
ndf = 64                # number of discriminator filters in first conv layer
scale_size = 286        # scale images to this size before cropping to 256x256
flip = True             # flip images horizontally
no_flip = True          # don't flip images horizontally
lr = 0.0002             # initial learning rate for adam
beta1 = 0.5             # momentum term of adam
l1_weight = 100.0       # weight on L1 term for generator gradient
gan_weight = 1.0        # weight on GAN term for generator gradient
output_filetype = "png" # output image format
EPS = 1e-12             # tiny constant to keep the loss away from log(0)
CROP_SIZE = 256         # crop size of the images

# Named tuples holding the loaded dataset and the built model
Examples = collections.namedtuple("Examples", "paths, inputs, targets, count, steps_per_epoch")
Model = collections.namedtuple("Model", "outputs, predict_real, predict_fake, discrim_loss, discrim_grads_and_vars, gen_loss_GAN, gen_loss_L1, gen_grads_and_vars, train")

# Image preprocessing: [0, 1] => [-1, 1]
def preprocess(image):
    with tf.name_scope("preprocess"):
        return image * 2 - 1

# Image postprocessing: [-1, 1] => [0, 1]
def deprocess(image):
    with tf.name_scope("deprocess"):
        return (image + 1) / 2

# Discriminator convolution; batch_input is [batch, 256, 256, 6]
def discrim_conv(batch_input, out_channels, stride):
    # [batch, 256, 256, 6] => [batch, 258, 258, 6]
    padded_input = tf.pad(batch_input, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="CONSTANT")
    '''
    [0,0]: no padding on the batch dimension
    [1,1]: pad one column of zeros on each side of the width dimension
    [1,1]: pad one row of zeros on each side of the height dimension
    [0,0]: no padding on the channel dimension
    '''
    return tf.layers.conv2d(padded_input, out_channels, kernel_size=4, strides=(stride, stride),
                            padding="valid", kernel_initializer=tf.random_normal_initializer(0, 0.02))

# Generator convolution: 4x4 kernel, stride 2, output is half the input size
def gen_conv(batch_input, out_channels):
    # [batch, in_height, in_width, in_channels] => [batch, out_height, out_width, out_channels]
    initializer = tf.random_normal_initializer(0, 0.02)
    if separable_conv:
        return tf.layers.separable_conv2d(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                          padding="same", depthwise_initializer=initializer,
                                          pointwise_initializer=initializer)
    else:
        return tf.layers.conv2d(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                padding="same", kernel_initializer=initializer)

# Generator deconvolution
def gen_deconv(batch_input, out_channels):
    # [batch, in_height, in_width, in_channels] => [batch, out_height, out_width, out_channels]
    initializer = tf.random_normal_initializer(0, 0.02)
    if separable_conv:
        _b, h, w, _c = batch_input.shape
        resized_input = tf.image.resize_images(batch_input, [h * 2, w * 2],
                                               method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
        return tf.layers.separable_conv2d(resized_input, out_channels, kernel_size=4, strides=(1, 1),
                                          padding="same", depthwise_initializer=initializer,
                                          pointwise_initializer=initializer)
    else:
        return tf.layers.conv2d_transpose(batch_input, out_channels, kernel_size=4, strides=(2, 2),
                                          padding="same", kernel_initializer=initializer)

# LReLU activation
def lrelu(x, a):
    with tf.name_scope("lrelu"):
        # adding these together creates the leak part and linear part
        # then cancels them out by subtracting/adding an absolute value term
        # leak: a*x/2 - a*abs(x)/2
        # linear: x/2 + abs(x)/2
        # this block looks like it has 2 inputs on the graph unless we do this
        x = tf.identity(x)
        return (0.5 * (1 + a)) * x + (0.5 * (1 - a)) * tf.abs(x)

# Batch normalization
def batchnorm(inputs):
    return tf.layers.batch_normalization(inputs, axis=3, epsilon=1e-5, momentum=0.1, training=True,
                                         gamma_initializer=tf.random_normal_initializer(1.0, 0.02))

# Check the image dimensions
def check_image(image):
    assertion = tf.assert_equal(tf.shape(image)[-1], 3, message="image must have 3 color channels")
    with tf.control_dependencies([assertion]):
        image = tf.identity(image)
    if image.get_shape().ndims not in (3, 4):
        raise ValueError("image must be either 3 or 4 dimensions")
    # make the last dimension 3 so that you can unstack the colors
    shape = list(image.get_shape())
    shape[-1] = 3
    image.set_shape(shape)
    return image

# Strip the extension to get the file name
def get_name(path):
    # os.path.basename() returns the final component of path (empty if path ends with / or \).
    # os.path.splitext() splits the name from the extension, returning a (fname, fextension) tuple.
    name, _ = os.path.splitext(os.path.basename(path))
    return name

# Load the dataset: read files --> decode --> normalize --> split into input and target
# --> map pixels to [-1, 1] --> reshape
def load_examples(input_dir):
    if input_dir is None or not os.path.exists(input_dir):
        raise Exception("input_dir does not exist")
    # Collect all files in the directory matching the pattern, returned as a list
    input_paths = glob.glob(os.path.join(input_dir, "*.jpg"))
    # image decoder
    decode = tf.image.decode_jpeg
    if len(input_paths) == 0:
        input_paths = glob.glob(os.path.join(input_dir, "*.png"))
        decode = tf.image.decode_png
    if len(input_paths) == 0:
        raise Exception("input_dir contains no image files")
    # Sort numerically if the file names are numbers, otherwise alphabetically
    if all(get_name(path).isdigit() for path in input_paths):
        input_paths = sorted(input_paths, key=lambda path: int(get_name(path)))
    else:
        input_paths = sorted(input_paths)
    sess = tf.Session()
    with tf.name_scope("load_images"):
        # Pack all the files we need into an internal tf queue; tf then takes paths
        # from this queue when opening files. shuffle is True in training mode.
        path_queue = tf.train.string_input_producer(input_paths, shuffle=True)
        # read() returns a file name (key) and that file's contents (value), one file per call
        reader = tf.WholeFileReader()
        paths, contents = reader.read(path_queue)
        # Decode the file and normalize the image
        raw_input = decode(contents)
        raw_input = tf.image.convert_image_dtype(raw_input, dtype=tf.float32)  # normalize
        # Assert that the two values are equal, raising otherwise
        assertion = tf.assert_equal(tf.shape(raw_input)[2], 3, message="image does not have 3 channels")
        '''
        control_dependencies only takes effect when the inner operation is an op: the ops
        passed as arguments run first, then the inner op runs. If the inner statement does
        not define an op, no node is added to the graph and the context manager has no
        effect. tf.identity is an op returning a new, identical tensor, which adds a node
        to the graph and makes control_dependencies take effect.
        '''
        with tf.control_dependencies([assertion]):
            raw_input = tf.identity(raw_input)
        raw_input.set_shape([None, None, 3])
        # map pixel values from [0,1] to [-1, 1]
        width = tf.shape(raw_input)[1]  # [height, width, channels]
        a_images = preprocess(raw_input[:, :width // 2, :])  # 256*256*3
        b_images = preprocess(raw_input[:, width // 2:, :])  # 256*256*3
        # which_direction is "BtoA" here
        if which_direction == "AtoB":
            inputs, targets = [a_images, b_images]
        elif which_direction == "BtoA":
            inputs, targets = [b_images, a_images]
        else:
            raise Exception("invalid direction")
    # synchronize seed for image operations so that we do the same operations to both
    # input and output images
    seed = random.randint(0, 2 ** 31 - 1)
    # image preprocessing: flip, reshape
    with tf.name_scope("input_images"):
        input_images = transform(inputs)
    with tf.name_scope("target_images"):
        target_images = transform(targets)
    # batch the input and target images
    paths_batch, inputs_batch, targets_batch = tf.train.batch([paths, input_images, target_images],
                                                              batch_size=batch_size)
    steps_per_epoch = int(math.ceil(len(input_paths) / batch_size))
    return Examples(
        paths=paths_batch,                # batch of file paths
        inputs=inputs_batch,              # batch of input images
        targets=targets_batch,            # batch of target images
        count=len(input_paths),           # dataset size
        steps_per_epoch=steps_per_epoch,  # number of batches per epoch
    )

# Image preprocessing: flip, reshape
def transform(image):
    r = image
    if flip:
        r = tf.image.random_flip_left_right(r, seed=seed)
    # area produces a nice downscaling, but does nearest neighbor for upscaling
    # assume we're going to be doing downscaling here
    r = tf.image.resize_images(r, [scale_size, scale_size], method=tf.image.ResizeMethod.AREA)
    offset = tf.cast(tf.floor(tf.random_uniform([2], 0, scale_size - CROP_SIZE + 1, seed=seed)), dtype=tf.int32)
    if scale_size > CROP_SIZE:
        r = tf.image.crop_to_bounding_box(r, offset[0], offset[1], CROP_SIZE, CROP_SIZE)
    elif scale_size < CROP_SIZE:
        raise Exception("scale size cannot be less than crop size")
    return r

# Build the generator, a variant of an encoder-decoder; input and output are both
# 256*256*3 with pixel values in [-1,1]
def create_generator(generator_inputs, generator_outputs_channels):
    layers = []
    # encoder_1: [batch, 256, 256, in_channels] => [batch, 128, 128, ngf]
    with tf.variable_scope("encoder_1"):
        output = gen_conv(generator_inputs, ngf)  # ngf is the first layer's filter count, default 64
        layers.append(output)
    layer_specs = [
        ngf * 2,  # encoder_2: [batch, 128, 128, ngf] => [batch, 64, 64, ngf * 2]
        ngf * 4,  # encoder_3: [batch, 64, 64, ngf * 2] => [batch, 32, 32, ngf * 4]
        ngf * 8,  # encoder_4: [batch, 32, 32, ngf * 4] => [batch, 16, 16, ngf * 8]
        ngf * 8,  # encoder_5: [batch, 16, 16, ngf * 8] => [batch, 8, 8, ngf * 8]
        ngf * 8,  # encoder_6: [batch, 8, 8, ngf * 8] => [batch, 4, 4, ngf * 8]
        ngf * 8,  # encoder_7: [batch, 4, 4, ngf * 8] => [batch, 2, 2, ngf * 8]
        ngf * 8,  # encoder_8: [batch, 2, 2, ngf * 8] => [batch, 1, 1, ngf * 8]
    ]
    # convolutional encoder
    for out_channels in layer_specs:
        with tf.variable_scope("encoder_%d" % (len(layers) + 1)):
            # apply the activation function to the last layer so far
            rectified = lrelu(layers[-1], 0.2)
            # [batch, in_height, in_width, in_channels] => [batch, in_height/2, in_width/2, out_channels]
            convolved = gen_conv(rectified, out_channels)
            output = batchnorm(convolved)
            layers.append(output)
    layer_specs = [
        (ngf * 8, 0.5),  # decoder_8: [batch, 1, 1, ngf * 8] => [batch, 2, 2, ngf * 8 * 2]
        (ngf * 8, 0.5),  # decoder_7: [batch, 2, 2, ngf * 8 * 2] => [batch, 4, 4, ngf * 8 * 2]
        (ngf * 8, 0.5),  # decoder_6: [batch, 4, 4, ngf * 8 * 2] => [batch, 8, 8, ngf * 8 * 2]
        (ngf * 8, 0.0),  # decoder_5: [batch, 8, 8, ngf * 8 * 2] => [batch, 16, 16, ngf * 8 * 2]
        (ngf * 4, 0.0),  # decoder_4: [batch, 16, 16, ngf * 8 * 2] => [batch, 32, 32, ngf * 4 * 2]
        (ngf * 2, 0.0),  # decoder_3: [batch, 32, 32, ngf * 4 * 2] => [batch, 64, 64, ngf * 2 * 2]
        (ngf, 0.0),      # decoder_2: [batch, 64, 64, ngf * 2 * 2] => [batch, 128, 128, ngf * 2]
    ]
    # convolutional decoder
    num_encoder_layers = len(layers)  # 8
    for decoder_layer, (out_channels, dropout) in enumerate(layer_specs):
        skip_layer = num_encoder_layers - decoder_layer - 1
        with tf.variable_scope("decoder_%d" % (skip_layer + 1)):
            if decoder_layer == 0:
                # first decoder layer doesn't have skip connections
                # since it is directly connected to the skip_layer
                input = layers[-1]
            else:
                input = tf.concat([layers[-1], layers[skip_layer]], axis=3)
            rectified = tf.nn.relu(input)
            # [batch, in_height, in_width, in_channels] => [batch, in_height*2, in_width*2, out_channels]
            output = gen_deconv(rectified, out_channels)
            output = batchnorm(output)
            if dropout > 0.0:
                output = tf.nn.dropout(output, keep_prob=1 - dropout)
            layers.append(output)
    # decoder_1: [batch, 128, 128, ngf * 2] => [batch, 256, 256, generator_outputs_channels]
    with tf.variable_scope("decoder_1"):
        input = tf.concat([layers[-1], layers[0]], axis=3)
        rectified = tf.nn.relu(input)
        output = gen_deconv(rectified, generator_outputs_channels)
        output = tf.tanh(output)
        layers.append(output)
    return layers[-1]

# Build the discriminator. Inputs are a generated image and a real image, two
# [batch,256,256,3] tensors with values in [-1,1]; output is [batch,30,30,1] probabilities.
def create_discriminator(discrim_inputs, discrim_targets):
    n_layers = 3
    layers = []
    # 2x [batch, height, width, in_channels] => [batch, height, width, in_channels * 2]
    input = tf.concat([discrim_inputs, discrim_targets], axis=3)
    # layer_1: [batch, 256, 256, in_channels * 2] => [batch, 128, 128, ndf]
    with tf.variable_scope("layer_1"):
        convolved = discrim_conv(input, ndf, stride=2)
        rectified = lrelu(convolved, 0.2)
        layers.append(rectified)
    # layer_2: [batch, 128, 128, ndf] => [batch, 64, 64, ndf * 2]
    # layer_3: [batch, 64, 64, ndf * 2] => [batch, 32, 32, ndf * 4]
    # layer_4: [batch, 32, 32, ndf * 4] => [batch, 31, 31, ndf * 8]
    for i in range(n_layers):
        with tf.variable_scope("layer_%d" % (len(layers) + 1)):
            out_channels = ndf * min(2 ** (i + 1), 8)
            stride = 1 if i == n_layers - 1 else 2  # last layer here has stride 1
            convolved = discrim_conv(layers[-1], out_channels, stride=stride)
            normalized = batchnorm(convolved)
            rectified = lrelu(normalized, 0.2)
            layers.append(rectified)
    # layer_5: [batch, 31, 31, ndf * 8] => [batch, 30, 30, 1]
    with tf.variable_scope("layer_%d" % (len(layers) + 1)):
        convolved = discrim_conv(rectified, out_channels=1, stride=1)
        output = tf.sigmoid(convolved)
        layers.append(output)
    return layers[-1]

# Build the pix2pix model; inputs and targets have shape [batch_size, height, width, channels]
def create_model(inputs, targets):
    with tf.variable_scope("generator"):
        out_channels = int(targets.get_shape()[-1])
        outputs = create_generator(inputs, out_channels)
    # create two copies of discriminator, one for real pairs and one for fake pairs
    # they share the same underlying variables
    with tf.name_scope("real_discriminator"):
        with tf.variable_scope("discriminator"):
            # 2x [batch, height, width, channels] => [batch, 30, 30, 1]
            predict_real = create_discriminator(inputs, targets)  # conditioning image and real image
    with tf.name_scope("fake_discriminator"):
        with tf.variable_scope("discriminator", reuse=True):
            # 2x [batch, height, width, channels] => [batch, 30, 30, 1]
            predict_fake = create_discriminator(inputs, outputs)  # conditioning image and generated image
    # Discriminator loss; the discriminator wants V(G,D) as large as possible
    with tf.name_scope("discriminator_loss"):
        # minimizing -tf.log will try to get inputs to 1
        # predict_real => 1
        # predict_fake => 0
        discrim_loss = tf.reduce_mean(-(tf.log(predict_real + EPS) + tf.log(1 - predict_fake + EPS)))
    # Generator loss; the generator wants V(G,D) as small as possible
    with tf.name_scope("generator_loss"):
        # predict_fake => 1
        # abs(targets - outputs) => 0
        gen_loss_GAN = tf.reduce_mean(-tf.log(predict_fake + EPS))
        gen_loss_L1 = tf.reduce_mean(tf.abs(targets - outputs))
        gen_loss = gen_loss_GAN * gan_weight + gen_loss_L1 * l1_weight
    # Discriminator training
    with tf.name_scope("discriminator_train"):
        # parameters the discriminator optimizes
        discrim_tvars = [var for var in tf.trainable_variables() if var.name.startswith("discriminator")]
        # define the optimizer
        discrim_optim = tf.train.AdamOptimizer(lr, beta1)
        # gradients of the loss with respect to the optimized parameters
        discrim_grads_and_vars = discrim_optim.compute_gradients(discrim_loss, var_list=discrim_tvars)
        # apply the gradients to update the parameters, returning an op
        discrim_train = discrim_optim.apply_gradients(discrim_grads_and_vars)
    # Generator training
    with tf.name_scope("generator_train"):
        with tf.control_dependencies([discrim_train]):
            # parameters the generator optimizes
            gen_tvars = [var for var in tf.trainable_variables() if var.name.startswith("generator")]
            # define the optimizer
            gen_optim = tf.train.AdamOptimizer(lr, beta1)
            # gradients of the loss with respect to the optimized parameters
            gen_grads_and_vars = gen_optim.compute_gradients(gen_loss, var_list=gen_tvars)
            # apply the gradients to update the parameters, returning an op
            gen_train = gen_optim.apply_gradients(gen_grads_and_vars)
    '''
    When training with stochastic gradient descent, tf.train.ExponentialMovingAverage
    improves the model's robustness on test data. It takes a decay rate that controls
    how fast the model updates, and maintains a shadow variable for every tracked
    variable, initialized to that variable's initial value:
        shadow_variable = decay * shadow_variable + (1 - decay) * variable
    '''
    ema = tf.train.ExponentialMovingAverage(decay=0.99)
    update_losses = ema.apply([discrim_loss, gen_loss_GAN, gen_loss_L1])
    global_step = tf.train.get_or_create_global_step()
    incr_global_step = tf.assign(global_step, global_step + 1)
    return Model(
        predict_real=predict_real,  # probabilities for (input, real) pairs, shape [batch,30,30,1]
        predict_fake=predict_fake,  # probabilities for (input, generated) pairs, shape [batch,30,30,1]
        discrim_loss=ema.average(discrim_loss),         # discriminator loss
        discrim_grads_and_vars=discrim_grads_and_vars,  # discriminator parameters and gradients
        gen_loss_GAN=ema.average(gen_loss_GAN),         # generator GAN loss
        gen_loss_L1=ema.average(gen_loss_L1),           # generator L1 loss
        gen_grads_and_vars=gen_grads_and_vars,          # generator parameters and gradients
        outputs=outputs,                                # generated images
        train=tf.group(update_losses, incr_global_step, gen_train),  # grouped ops to run
    )

# Save images
def save_images(output_dir, fetches, step=None):
    image_dir = os.path.join(output_dir, "images")
    if not os.path.exists(image_dir):
        os.makedirs(image_dir)
    filesets = []
    for i, in_path in enumerate(fetches["paths"]):
        name, _ = os.path.splitext(os.path.basename(in_path.decode("utf8")))
        fileset = {"name": name, "step": step}
        for kind in ["inputs", "outputs", "targets"]:
            filename = name + "-" + kind + ".png"
            if step is not None:
                filename = "%08d-%s" % (step, filename)
            fileset[kind] = filename
            out_path = os.path.join(image_dir, filename)
            contents = fetches[kind][i]
            with open(out_path, "wb") as f:
                f.write(contents)
        filesets.append(fileset)
    return filesets

# Write the results to an HTML page
def append_index(output_dir, filesets, step=False):
    index_path = os.path.join(output_dir, "index.html")
    if os.path.exists(index_path):
        index = open(index_path, "a")
    else:
        index = open(index_path, "w")
        index.write("<html><body><table><tr>")
        if step:
            index.write("<th>step</th>")
        index.write("<th>name</th><th>input</th><th>output</th><th>target</th></tr>")
    for fileset in filesets:
        index.write("<tr>")
        if step:
            index.write("<td>%d</td>" % fileset["step"])
        index.write("<td>%s</td>" % fileset["name"])
        for kind in ["inputs", "outputs", "targets"]:
            index.write("<td><img src='images/%s'></td>" % fileset[kind])
        index.write("</tr>")
    return index_path

# Resize the image and convert [0,1] to [0,255]
def convert(image):
    if aspect_ratio != 1.0:
        # upscale to correct aspect ratio
        size = [CROP_SIZE, int(round(CROP_SIZE * aspect_ratio))]
        image = tf.image.resize_images(image, size=size, method=tf.image.ResizeMethod.BICUBIC)
    # convert to 8-bit unsigned integers
    return tf.image.convert_image_dtype(image, dtype=tf.uint8, saturate=True)

# Training entry point
def train():
    # seed the random number generators
    global seed
    if seed is None:
        seed = random.randint(0, 2 ** 31 - 1)
    tf.set_random_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    # create the output directory
    if not os.path.exists(train_output_dir):
        os.makedirs(train_output_dir)
    # load the dataset; inputs and targets are mapped to [-1,1]
    examples = load_examples(train_input_dir)
    print("load successful ! examples count = %d" % examples.count)
    # build the model; inputs and targets are [batch_size, height, width, channels]
    model = create_model(examples.inputs, examples.targets)
    print("create model successful!")
    # map images from [-1, 1] back to [0, 1]
    inputs = deprocess(examples.inputs)
    targets = deprocess(examples.targets)
    outputs = deprocess(model.outputs)
    # convert [0,1] pixels to RGB values in [0,255]
    with tf.name_scope("convert_inputs"):
        converted_inputs = convert(inputs)
    with tf.name_scope("convert_targets"):
        converted_targets = convert(targets)
    with tf.name_scope("convert_outputs"):
        converted_outputs = convert(outputs)
    # encode the images so they can be saved
    with tf.name_scope("encode_images"):
        display_fetches = {
            "paths": examples.paths,
            # tf.map_fn applies the given function to every element of the collection
            "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"),
            "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"),
            "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"),
        }
    with tf.name_scope("parameter_count"):
        parameter_count = tf.reduce_sum([tf.reduce_prod(tf.shape(v)) for v in tf.trainable_variables()])
    # keep only the most recent checkpoints
    saver = tf.train.Saver(max_to_keep=20)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        print("parameter_count =", sess.run(parameter_count))
        if max_epochs is not None:
            max_steps = examples.steps_per_epoch * max_epochs  # 400x200=80000
        # Since the data is read from files we need start_queue_runners(), which
        # starts the input pipeline threads that fill the queue with examples so
        # that dequeue ops can fetch them.
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
        # run the training loop
        print("begin trainning......")
        print("max_steps:", max_steps)
        start = time.time()
        for step in range(max_steps):
            def should(freq):
                return freq > 0 and ((step + 1) % freq == 0 or step == max_steps - 1)
            print("step:", step)
            # dictionary of all the ops to run
            fetches = {
                "train": model.train
            }
            # progress_freq is 50: compute the three losses every 50 steps to show progress
            if should(progress_freq):
                fetches["discrim_loss"] = model.discrim_loss
                fetches["gen_loss_GAN"] = model.gen_loss_GAN
                fetches["gen_loss_L1"] = model.gen_loss_L1
            # display_freq is 50: save the input/target/output images every 50 steps
            if should(display_freq):
                fetches["display"] = display_fetches
            # run everything
            results = sess.run(fetches)
            # display_freq is 50: save the input/target/output images every 50 steps
            if should(display_freq):
                print("saving display images")
                filesets = save_images(train_output_dir, results["display"], step=step)
                append_index(train_output_dir, filesets, step=True)
            # progress_freq is 50: print the three losses every 50 steps to show progress
            if should(progress_freq):
                # global_step will have the correct step count if we resume from a checkpoint
                train_epoch = math.ceil(step / examples.steps_per_epoch)
                train_step = (step - 1) % examples.steps_per_epoch + 1
                rate = (step + 1) * batch_size / (time.time() - start)
                remaining = (max_steps - step) * batch_size / rate
                print("progress epoch %d step %d image/sec %0.1f remaining %dm" % (
                    train_epoch, train_step, rate, remaining / 60))
                print("discrim_loss", results["discrim_loss"])
                print("gen_loss_GAN", results["gen_loss_GAN"])
                print("gen_loss_L1", results["gen_loss_L1"])
            # save_freq is 500: save the model every 500 steps
            if should(save_freq):
                print("saving model")
                saver.save(sess, os.path.join(train_output_dir, "model"), global_step=step)

# Testing entry point
def test():
    # seed the random number generators
    global seed
    if seed is None:
        seed = random.randint(0, 2 ** 31 - 1)
    tf.set_random_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    # create the output directory
    if not os.path.exists(test_output_dir):
        os.makedirs(test_output_dir)
    if checkpoint is None:
        raise Exception("checkpoint required for test mode")
    # disable these features in test mode
    scale_size = CROP_SIZE
    flip = False
    # load the dataset
    examples = load_examples(test_input_dir)
    print("load successful ! examples count = %d" % examples.count)
    # build the model; inputs and targets are [batch_size, height, width, channels]
    model = create_model(examples.inputs, examples.targets)
    print("create model successful!")
    # map images from [-1, 1] back to [0, 1]
    inputs = deprocess(examples.inputs)
    targets = deprocess(examples.targets)
    outputs = deprocess(model.outputs)
    # convert [0,1] pixels to RGB values in [0,255]
    with tf.name_scope("convert_inputs"):
        converted_inputs = convert(inputs)
    with tf.name_scope("convert_targets"):
        converted_targets = convert(targets)
    with tf.name_scope("convert_outputs"):
        converted_outputs = convert(outputs)
    # encode the images so they can be saved
    with tf.name_scope("encode_images"):
        display_fetches = {
            "paths": examples.paths,
            # tf.map_fn applies the given function to every element of the collection
            "inputs": tf.map_fn(tf.image.encode_png, converted_inputs, dtype=tf.string, name="input_pngs"),
            "targets": tf.map_fn(tf.image.encode_png, converted_targets, dtype=tf.string, name="target_pngs"),
            "outputs": tf.map_fn(tf.image.encode_png, converted_outputs, dtype=tf.string, name="output_pngs"),
        }
    sess = tf.InteractiveSession()
    saver = tf.train.Saver(max_to_keep=1)
    ckpt = tf.train.get_checkpoint_state(checkpoint)
    saver.restore(sess, ckpt.model_checkpoint_path)
    start = time.time()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    for step in range(examples.count):
        results = sess.run(display_fetches)
        filesets = save_images(test_output_dir, results)
        for i, f in enumerate(filesets):
            print("evaluated image", f["name"])
        index_path = append_index(test_output_dir, filesets)
        print("wrote index at", index_path)
    print("rate", (time.time() - start) / max_steps)

if __name__ == '__main__':
    train()
    # test()
```
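
The thread never resolves this; one hedged direction (my suggestion, not part of the referenced code): the U-Net generator's skip connections tie it to square power-of-two sizes, so the least invasive change is to leave the network at 256x256 and resize only at the edges of the model, e.g. resize the 200x256 side up to 256x256 when loading pairs, then resize outputs and targets back down before computing the L1 loss or saving (the existing aspect_ratio parameter in convert() already performs this kind of save-time resize):

```
# Hypothetical tweak: keep the generator at 256x256 but compare/save at 200x256.
# Here 200 is treated as the width, as in the question's "input 256x256, target 200x256".
targets_small = tf.image.resize_images(examples.targets, [256, 200])  # [height, width]
outputs_small = tf.image.resize_images(model.outputs, [256, 200])
gen_loss_L1_small = tf.reduce_mean(tf.abs(targets_small - outputs_small))
```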

When training a neural network with gradient descent, shouldn't you iterate on the same sample until its loss is minimized, and only then move on to the next sample?

While learning how neural networks are trained I have some questions about gradient descent.

1. In the training process shown below, when iterating, shouldn't we keep using the same sample to update the weights until its loss reaches the extremum?
2. My understanding is that every sample has its own loss extremum, doesn't it? But if we train on different samples, training on A gives a minimum loss that fits sample A, and training on B gives one that fits sample B and no longer fits A, right? ![图片说明](https://img-ask.csdn.net/upload/201907/16/1563289204_340670.png)
3. How should the code below be understood, i.e. what is the point of each training batch and of reusing samples? And what does feed_dict do here: with 30000 steps, feeding one batch of 8 samples per step, how does the training proceed?

```
import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
seed = 23455

#### 1. Generate the samples and labels
# generate random numbers based on the seed
rng = np.random.RandomState(seed)
# a random 32x2 matrix as the input samples
X = rng.rand(32, 2)
# set the label Y for each sample row: sum < 1 => 1
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print('X:\n', X)
print('Y:\n', Y)

#### 2. Define the network inputs, parameters and outputs; define the forward pass
x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))  # true label
# 3 is the number of neurons in the hidden layer. For example: one hidden layer
# uses (w1, w2), two hidden layers use (w1, w2, w3);
# n hidden layers need n+1 weight matrices.
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

#### 3. Define the loss function and the backpropagation method
loss = tf.reduce_mean(tf.square(y - y_))  # loss: mean squared error
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)  # gradient descent
# train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss)
# train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

#### 4. Create the session
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # print the untrained, randomly initialized parameters
    print('w1:\n', sess.run(w1))
    print('w2:\n', sess.run(w2))
    print('\n')

    #### 5. Train the model
    STEPS = 30000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print('After %d training steps, loss on all data is %g' % (i, total_loss))

    #### 6. Print the trained parameters
    print('\n')
    print('w1:\n', sess.run(w1))
    print('w2:\n', sess.run(w2))
```
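
A note on question 3 (my explanation, not from the thread): gradient descent minimizes the average loss over the whole training set, so each step takes one small step on one batch rather than driving a single sample's loss to its minimum; feed_dict simply binds the placeholders x and y_ to that batch's numpy values for one sess.run call. Because start = (i * BATCH_SIZE) % 32 wraps around, the 30000 steps cycle through the 32 samples again and again (epochs), which is what lets the final weights fit all samples on average rather than only the last one. A quick check of how the batch window walks through the data:

```
BATCH_SIZE = 8
for i in range(6):  # the first 6 of the 30000 steps
    start = (i * BATCH_SIZE) % 32
    print('step %d feeds X[%d:%d]' % (i, start, start + BATCH_SIZE))
# step 0 feeds X[0:8], step 1 feeds X[8:16], step 2 feeds X[16:24],
# step 3 feeds X[24:32], step 4 wraps back around to X[0:8], ...
```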

TensorFlow image recognition with a self-made dataset: the program hangs while reading the tfrecord file

I built a dataset from my own images to train a neural network. When feeding the data the types didn't match, and after adding session.run() the program just hangs. The code is below; any guidance is appreciated. The program gets stuck between print("begin4") and print("begin5").

```
import tensorflow as tf
from encode_to_tfrecords import create_record, create_test_record, read_and_decode, get_batch, get_test_batch

n_input = 154587
n_classes = 3
dropout = 0.5
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.int32, [None, n_classes])
keep_drop = tf.placeholder(tf.float32)

class network(object):
    def inference(self, images, keep_drop):
        ##############################################################################
        # reshape the vectors into matrices
        # images = tf.reshape(images, shape=[-1, 39, 39, 3])
        images = tf.reshape(images, shape=[-1, 227, 227, 3])  # [batch, in_height, in_width, in_channels]
        images = (tf.cast(images, tf.float32) / 255. - 0.5) * 2  # normalize
        ##############################################################################
        # layer 1: convolution, bias and subsampling
        conv1 = tf.nn.bias_add(tf.nn.conv2d(images, self.weights['conv1'], strides=[1, 4, 4, 1], padding='VALID'),
                               self.biases['conv1'])
        relu1 = tf.nn.relu(conv1)
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')
        # layer 2
        conv2 = tf.nn.bias_add(tf.nn.conv2d(pool1, self.weights['conv2'], strides=[1, 1, 1, 1], padding='SAME'),
                               self.biases['conv2'])
        relu2 = tf.nn.relu(conv2)
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')
        # layer 3
        conv3 = tf.nn.bias_add(tf.nn.conv2d(pool2, self.weights['conv3'], strides=[1, 1, 1, 1], padding='SAME'),
                               self.biases['conv3'])
        relu3 = tf.nn.relu(conv3)
        # pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        conv4 = tf.nn.bias_add(tf.nn.conv2d(relu3, self.weights['conv4'], strides=[1, 1, 1, 1], padding='SAME'),
                               self.biases['conv4'])
        relu4 = tf.nn.relu(conv4)
        conv5 = tf.nn.bias_add(tf.nn.conv2d(relu4, self.weights['conv5'], strides=[1, 1, 1, 1], padding='SAME'),
                               self.biases['conv5'])
        relu5 = tf.nn.relu(conv5)
        pool5 = tf.nn.max_pool(relu5, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='VALID')
        # fully connected layer 1: flatten the feature maps into a vector first
        flatten = tf.reshape(pool5, [-1, self.weights['fc1'].get_shape().as_list()[0]])
        # dropout rate 0.5
        drop1 = tf.nn.dropout(flatten, keep_drop)
        fc1 = tf.matmul(drop1, self.weights['fc1']) + self.biases['fc1']
        fc_relu1 = tf.nn.relu(fc1)
        fc2 = tf.matmul(fc_relu1, self.weights['fc2']) + self.biases['fc2']
        fc_relu2 = tf.nn.relu(fc2)
        fc3 = tf.matmul(fc_relu2, self.weights['fc3']) + self.biases['fc3']
        return fc3

    def __init__(self):
        # initialize the weights and biases
        with tf.variable_scope("weights"):
            self.weights = {
                # 39*39*3 -> 36*36*20 -> 18*18*20
                'conv1': tf.get_variable('conv1', [11, 11, 3, 96],
                                         initializer=tf.contrib.layers.xavier_initializer_conv2d()),
                # 18*18*20 -> 16*16*40 -> 8*8*40
                'conv2': tf.get_variable('conv2', [5, 5, 96, 256],
                                         initializer=tf.contrib.layers.xavier_initializer_conv2d()),
                # 8*8*40 -> 6*6*60 -> 3*3*60
                'conv3': tf.get_variable('conv3', [3, 3, 256, 384],
                                         initializer=tf.contrib.layers.xavier_initializer_conv2d()),
                # 3*3*60 -> 120
                'conv4': tf.get_variable('conv4', [3, 3, 384, 384],
                                         initializer=tf.contrib.layers.xavier_initializer_conv2d()),
                'conv5': tf.get_variable('conv5', [3, 3, 384, 256],
                                         initializer=tf.contrib.layers.xavier_initializer_conv2d()),
                'fc1': tf.get_variable('fc1', [6 * 6 * 256, 4096],
                                       initializer=tf.contrib.layers.xavier_initializer()),
                'fc2': tf.get_variable('fc2', [4096, 4096],
                                       initializer=tf.contrib.layers.xavier_initializer()),
                'fc3': tf.get_variable('fc3', [4096, 1000],
                                       initializer=tf.contrib.layers.xavier_initializer()),
            }
        with tf.variable_scope("biases"):
            self.biases = {
                'conv1': tf.get_variable('conv1', [96, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'conv2': tf.get_variable('conv2', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'conv3': tf.get_variable('conv3', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'conv4': tf.get_variable('conv4', [384, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'conv5': tf.get_variable('conv5', [256, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'fc1': tf.get_variable('fc1', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'fc2': tf.get_variable('fc2', [4096, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32)),
                'fc3': tf.get_variable('fc3', [1000, ], initializer=tf.constant_initializer(value=0.1, dtype=tf.float32))
            }

    # softmax cross-entropy loss
    def sorfmax_loss(self, predicts, labels):
        predicts = tf.nn.softmax(predicts)
        labels = tf.one_hot(labels, self.weights['fc3'].get_shape().as_list()[1])
        loss = tf.nn.softmax_cross_entropy_with_logits(logits=predicts, labels=labels)
        # loss = -tf.reduce_mean(labels * tf.log(predicts))
        self.cost = loss
        return self.cost

    # gradient descent
    def optimer(self, loss, lr=0.01):
        train_optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss)
        return train_optimizer

# define the training
create_record('/Users/hanjiarong/Documents/testdata/tfrtrain')
# image, label = read_and_decode('train.tfrecords')
# batch_image, batch_label = get_batch(image, label, 30)

# wire up the network and train it
net = network()
inf = net.inference(x, dropout)
loss = net.sorfmax_loss(inf, y)
opti = net.optimer(loss)
correct_pred = tf.equal(tf.argmax(inf, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# define the test data
create_test_record('/Users/hanjiarong/Documents/testdata/tfrtest')
# image_t, label_t = read_and_decode('test.tfrecords')
# batch_test_image, batch_test_label = get_test_batch(image_t, label_t, 50)

init = tf.initialize_all_variables()
with tf.Session() as session:
    session.run(init)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    max_iter = 100000
    iter = 1
    print("begin1")
    while iter * 30 < max_iter:
        print("begin2")
        image, label = read_and_decode('train.tfrecords')
        print("begin3")
        batch_image, batch_label = get_batch(image, label, 1)
        print("begin4")
        batch_image = session.run(batch_image)
        batch_label = session.run(batch_label)
        print("begin5")
        # loss_np, _, label_np, image_np, inf_np = session.run([loss, opti, batch_label, batch_image, inf])
        session.run(opti, feed_dict={x: batch_image, y: batch_label, keep_drop: dropout})
        print("begin6")
        if iter % 10 == 0:
            loss, acc = session.run([loss, accuracy], feed_dict={x: batch_image, y: batch_label, keep_drop: 1.})
            print("Iter " + str(iter * 30) + ", Minibatch Loss= " +
                  "{:.6f}".format(loss) + ", Training Accuracy= " + "{:.5f}".format(acc))
        iter += 1
    print("Optimization Finished!")
    image, label = read_and_decode('test.tfrecords')
    batch_test_image, batch_test_label = get_batch(image, label, 2)
    img_test, lab_test = session.run([batch_test_image, batch_test_label])
    test_accuracy = session.run(accuracy, feed_dict={x: img_test, y: lab_test, keep_drop: 1.})
    print("Testing Accuracy:", test_accuracy)
```
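
The thread reaches no verdict, but the classic cause of exactly this hang: read_and_decode and get_batch build new queue ops inside the while loop, after tf.train.start_queue_runners has already been called, so the new queues never get filler threads and session.run(batch_image) blocks forever on an empty queue. (Fetching batch_image and batch_label in two separate session.run calls would also desynchronize images from their labels.) A sketch of the usual structure, using the names from the question's code:

```
# Build the input pipeline ONCE, before the session starts.
image, label = read_and_decode('train.tfrecords')
batch_image, batch_label = get_batch(image, label, 30)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=session, coord=coord)  # threads now serve every queue
    for it in range(max_iter // 30):
        # fetch images and labels in ONE run call so they stay paired
        img, lab = session.run([batch_image, batch_label])
        session.run(opti, feed_dict={x: img, y: lab, keep_drop: dropout})
    coord.request_stop()
    coord.join(threads)
```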

TensorFlow error: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3], but the data looks fine no matter how I check; please advise

While debugging this GoogleNet code I keep getting InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3]. However I look at it, the data I am feeding seems fine. Please advise.

```
# -*- coding: utf-8 -*-
"""
GoogleNet, also known as InceptionNet
Created on Mon Feb 10 12:15:35 2020
@author: 月光下的云海
"""
import tensorflow as tf
from keras.datasets import cifar10
import numpy as np
import tensorflow.contrib.slim as slim

tf.reset_default_graph()
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
y_train = y_train.astype('int32')
y_test = y_test.astype('int32')
y_train = y_train.reshape(y_train.shape[0])
y_test = y_test.reshape(y_test.shape[0])
x_train = x_train / 255
x_test = x_test / 255

# ***************************** build the inception block *****************************
# A multi-branch block.
# INPUTS:
#   d0_1: leftmost branch (branch 0), number of 1x1 kernels
#   d1_1: second branch (branch 1), number of 1x1 kernels
#   d1_3: second branch (branch 1), number of 3x3 kernels
#   d2_1: third branch (branch 2), number of 1x1 kernels
#   d2_5: third branch (branch 2), number of 5x5 kernels
#   d3_1: fourth branch (branch 3), number of 1x1 kernels
#   scope: variable scope name
#   reuse: whether to reuse variables
# **************************************************************************************
def inception(x, d0_1, d1_1, d1_3, d2_1, d2_5, d3_1, scope='inception', reuse=None):
    with tf.variable_scope(scope, reuse=reuse):
        # default arguments for slim.conv2d and slim.max_pool2d live in the slim arg scope
        with slim.arg_scope([slim.conv2d, slim.max_pool2d], stride=1, padding='SAME'):
            # branch 0
            with tf.variable_scope('branch0'):
                branch_0 = slim.conv2d(x, d0_1, [1, 1], scope='conv_1x1')
            # branch 1
            with tf.variable_scope('branch1'):
                branch_1 = slim.conv2d(x, d1_1, [1, 1], scope='conv_1x1')
                branch_1 = slim.conv2d(branch_1, d1_3, [3, 3], scope='conv_3x3')
            # branch 2
            with tf.variable_scope('branch2'):
                branch_2 = slim.conv2d(x, d2_1, [1, 1], scope='conv_1x1')
                branch_2 = slim.conv2d(branch_2, d2_5, [5, 5], scope='conv_5x5')
            # branch 3
            with tf.variable_scope('branch3'):
                branch_3 = slim.max_pool2d(x, [3, 3], scope='max_pool')
                branch_3 = slim.conv2d(branch_3, d3_1, [1, 1], scope='conv_1x1')
            # concatenate the branches
            net = tf.concat([branch_0, branch_1, branch_2, branch_3], axis=-1)
            return net

# *********************** build GoogleNet from inception blocks ***********************
# INPUTS:
#   inputs-----------input tensor
#   num_classes------number of output classes
#   is_trainning-----whether batch_norm runs in training mode; batch_norm and
#                    is_trainning are tightly coupled: when True it uses the batch's
#                    moving mean/variance, when False it uses fixed values
#   verbose----------controls printing
#   reuse------------whether to reuse variables
# **************************************************************************************
def googlenet(inputs, num_classes, reuse=None, is_trainning=None, verbose=False):
    with slim.arg_scope([slim.batch_norm], is_training=is_trainning):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            padding='SAME', stride=1):
            net = inputs
            # GoogleNet block 1
            with tf.variable_scope('block1', reuse=reuse):
                net = slim.conv2d(net, 64, [5, 5], stride=2, scope='conv_5x5')
                if verbose:
                    print('block1 output:{}'.format(net.shape))
            # GoogleNet block 2
            with tf.variable_scope('block2', reuse=reuse):
                net = slim.conv2d(net, 64, [1, 1], scope='conv_1x1')
                net = slim.conv2d(net, 192, [3, 3], scope='conv_3x3')
                net = slim.max_pool2d(net, [3, 3], stride=2, scope='max_pool')
                if verbose:
                    print('block2 output:{}'.format(net.shape))
            # GoogleNet block 3
            with tf.variable_scope('block3', reuse=reuse):
                net = inception(net, 64, 96, 128, 16, 32, 32, scope='inception_1')
                net = inception(net, 128, 128, 192, 32, 96, 64, scope='inception_2')
                net = slim.max_pool2d(net, [3, 3], stride=2, scope='max_pool')
                if verbose:
                    print('block3 output:{}'.format(net.shape))
            # GoogleNet block 4
            with tf.variable_scope('block4', reuse=reuse):
                net = inception(net, 192, 96, 208, 16, 48, 64, scope='inception_1')
                net = inception(net, 160, 112, 224, 24, 64, 64, scope='inception_2')
                net = inception(net, 128, 128, 256, 24, 64, 64, scope='inception_3')
                net = inception(net, 112, 144, 288, 24, 64, 64, scope='inception_4')
                net = inception(net, 256, 160, 320, 32, 128, 128, scope='inception_5')
                net = slim.max_pool2d(net, [3, 3], stride=2, scope='max_pool')
                if verbose:
                    print('block4 output:{}'.format(net.shape))
            # GoogleNet block 5
            with tf.variable_scope('block5', reuse=reuse):
                net = inception(net, 256, 160, 320, 32, 128, 128, scope='inception1')
                net = inception(net, 384, 182, 384, 48, 128, 128, scope='inception2')
                net = slim.avg_pool2d(net, [2, 2], stride=2, scope='avg_pool')
                if verbose:
                    print('block5 output:{}'.format(net.shape))
            # final block
            with tf.variable_scope('classification', reuse=reuse):
                net = slim.flatten(net)
                net = slim.fully_connected(net, num_classes, activation_fn=None,
                                           normalizer_fn=None, scope='logit')
                if verbose:
                    print('classification output:{}'.format(net.shape))
    return net

# default activation and batch_norm for the conv layers
with slim.arg_scope([slim.conv2d], activation_fn=tf.nn.relu, normalizer_fn=slim.batch_norm) as sc:
    conv_scope = sc

is_trainning_ph = tf.placeholder(tf.bool, name='is_trainning')
# placeholders
x_train_ph = tf.placeholder(shape=(None, x_train.shape[1], x_train.shape[2], x_train.shape[3]), dtype=tf.float32)
x_test_ph = tf.placeholder(shape=(None, x_test.shape[1], x_test.shape[2], x_test.shape[3]), dtype=tf.float32)
y_train_ph = tf.placeholder(shape=(None,), dtype=tf.int32)
y_test_ph = tf.placeholder(shape=(None,), dtype=tf.int32)

# instantiate the network
with slim.arg_scope(conv_scope):
    train_out = googlenet(x_train_ph, 10, is_trainning=is_trainning_ph, verbose=True)
    val_out = googlenet(x_test_ph, 10, is_trainning=is_trainning_ph, reuse=True)

# define loss and accuracy
with tf.variable_scope('loss'):
    train_loss = tf.losses.sparse_softmax_cross_entropy(labels=y_train_ph, logits=train_out, scope='train')
    val_loss = tf.losses.sparse_softmax_cross_entropy(labels=y_test_ph, logits=val_out, scope='val')
with tf.name_scope('accurcay'):
    train_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(train_out, axis=-1, output_type=tf.int32), y_train_ph), tf.float32))
    val_acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(val_out, axis=-1, output_type=tf.int32), y_test_ph), tf.float32))

# define the training op
lr = 1e-2
opt = tf.train.MomentumOptimizer(lr, momentum=0.9)
# collect all the ops that need updating via tf.get_collection
update_op = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
# use TensorFlow control flow: run update_op first, then minimize the loss
with tf.control_dependencies(update_op):
    train_op = opt.minimize(train_loss)

# open the session
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
batch_size = 64

# start training
for e in range(10000):
    batch1 = np.random.randint(0, 50000, size=batch_size)
    t_x_train = x_train[batch1][:][:][:]
    t_y_train = y_train[batch1]
    batch2 = np.random.randint(0, 10000, size=batch_size)
    t_x_test = x_test[batch2][:][:][:]
    t_y_test = y_test[batch2]
    sess.run(train_op, feed_dict={x_train_ph: t_x_train,
                                  is_trainning_ph: True,
                                  y_train_ph: t_y_train})
    # if (e % 1000 == 999):
    #     loss_train, acc_train = sess.run([train_loss, train_acc],
    #                                      feed_dict={x_train_ph: t_x_train,
    #                                                 is_trainning_ph: True,
    #                                                 y_train_ph: t_y_train})
    #     loss_test, acc_test = sess.run([val_loss, val_acc],
    #                                    feed_dict={x_test_ph: t_x_test,
    #                                               is_trainning_ph: False,
    #                                               y_test_ph: t_y_test})
    #     print('STEP{}:train_loss:{:.6f} train_acc:{:.6f} test_loss:{:.6f} test_acc:{:.6f}'
    #           .format(e + 1, loss_train, acc_train, loss_test, acc_test))

saver.save(sess=sess, save_path='VGGModel\model.ckpt')
print('Train Done!!')
print('--' * 60)
sess.close()
```

The error message is:

```
Using TensorFlow backend.
block1 output:(?, 16, 16, 64)
block2 output:(?, 8, 8, 192)
block3 output:(?, 4, 4, 480)
block4 output:(?, 2, 2, 832)
block5 output:(?, 1, 1, 1024)
classification output:(?, 10)
Traceback (most recent call last):
  File "<ipython-input-1-6385a760fe16>", line 1, in <module>
    runfile('F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py', wdir='F:/Project/TEMP/LearnTF/GoogleNet')
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "F:/Project/TEMP/LearnTF/GoogleNet/GoogleNet.py", line 177, in <module>
    y_train_ph:t_y_train})
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
    run_metadata_ptr)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run
    run_metadata)
  File "D:\ANACONDA\Anaconda3\envs\spyder\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [?,32,32,3]
	 [[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[?,32,32,3], _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
	 [[Node: gradients/block4/inception_4/concat_grad/ShapeN/_45 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_23694_gradients/block4/inception_4/concat_grad/ShapeN", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
```

I have gone over it many times and it does not look like a feeding problem. Baidu suggests a summary issue, but I don't use any summaries. I'm stuck.
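
The thread never settles it, but a likely cause (my reading): tf.get_collection(tf.GraphKeys.UPDATE_OPS) is called after both branches are built, so it also collects the batch-norm moving-average update ops of the validation branch, and those ops depend on x_test_ph ('Placeholder_1'). The control_dependencies then makes every run of train_op execute them, pulling the unfed test placeholder into the training step. One fix is to snapshot the update ops right after the training branch is built:

```
# Build the training branch first, then snapshot its update ops BEFORE the
# validation branch adds its own (which depend on x_test_ph).
with slim.arg_scope(conv_scope):
    train_out = googlenet(x_train_ph, 10, is_trainning=is_trainning_ph, verbose=True)
    train_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # training BN updates only
    val_out = googlenet(x_test_ph, 10, is_trainning=is_trainning_ph, reuse=True)

with tf.control_dependencies(train_update_ops):
    train_op = opt.minimize(train_loss)
```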

Seeking help with a U-Net image segmentation question

U-Net extracts features through convolutions and then upsamples, so the resulting values are irregular (continuous), while the desired output is regular values like 0 and 1. How do you use this output to do image segmentation?
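
The standard recipe (not spelled out in the thread): treat the final up-sampled feature map as per-pixel logits, squash it with a sigmoid (binary masks) or softmax (multi-class), then threshold or argmax to obtain the hard 0/1 labels; the continuous values are only an intermediate representation. A sketch, with logits standing for the network's last conv output:

```
import tensorflow as tf

# Binary case: logits has shape [batch, H, W, 1]
probs = tf.sigmoid(logits)             # per-pixel foreground probability in (0, 1)
mask = tf.cast(probs > 0.5, tf.int32)  # hard 0/1 segmentation mask

# Multi-class case: logits has shape [batch, H, W, num_classes]
# mask = tf.argmax(logits, axis=-1)    # per-pixel class index
```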

InvalidArgumentError: Input to reshape is a tensor with 152000 values, but the requested shape requires a multiple of 576

It runs without any messages and produces no output data; help from an expert appreciated!

```
# -*- coding: utf-8 -*-
"""
Created on Fri Oct  4 10:01:03 2019
@author: xxj
"""
import numpy as np
from sklearn import preprocessing
import tensorflow as tf
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import pandas as pd

# Read the CSV file and return the dataset as a DataFrame.
def zc_func_read_csv():
    zc_var_dataframe = pd.read_csv("highway.csv", sep=",")
    # Shuffle the dataset. Data files may be sorted in some order,
    # which would affect how we process the data.
    zc_var_dataframe = zc_var_dataframe.reindex(np.random.permutation(zc_var_dataframe.index))
    return zc_var_dataframe

# Preprocess the features
def preprocess_features(highway):
    processed_features = highway[
        ["line1", "line2", "line3", "line4", "line5",
         "brige1", "brige2", "brige3", "brige4", "brige5",
         "tunnel1", "tunnel2", "tunnel3", "tunnel4", "tunnel5",
         "inter1", "inter2", "inter3", "inter4", "inter5",
         "econmic1", "econmic2", "econmic3", "econmic4", "econmic5"]
    ]
    return processed_features

# Preprocess the labels
highway = zc_func_read_csv()
x = preprocess_features(highway)
outtarget = np.array(pd.read_csv("highway1.csv"))
y = np.array(outtarget[:, [0]])
print('##################################################################')

# random train/test split
train_x_disorder, test_x_disorder, train_y_disorder, test_y_disorder = train_test_split(
    x, y, train_size=0.8, random_state=33)

# standardize the data
ss_x = preprocessing.StandardScaler()
train_x_disorder = ss_x.fit_transform(train_x_disorder)
test_x_disorder = ss_x.transform(test_x_disorder)
ss_y = preprocessing.StandardScaler()
train_y_disorder = ss_y.fit_transform(train_y_disorder.reshape(-1, 1))
test_y_disorder = ss_y.transform(test_y_disorder.reshape(-1, 1))

# weight matrix
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# bias
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# convolution (the "thickening" step)
def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]; x_movement/y_movement are the step sizes.
    # Must have strides[0] = strides[3] = 1. padding='SAME' keeps width/height unchanged.
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# pooling: halves width and height
def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# define placeholders for the network inputs
xs = tf.placeholder(tf.float32, [None, 25])  # raw data dimension: 25
ys = tf.placeholder(tf.float32, [None, 1])   # output dimension: 1
keep_prob = tf.placeholder(tf.float32)       # dropout keep probability
x_image = tf.reshape(xs, [-1, 5, 5, 1])      # the 25 raw values become a 5x5 two-dimensional image

## conv1 layer: first convolutional layer
W_conv1 = weight_variable([2, 2, 1, 32])  # patch 2x2, in size 1, out size 32: each pixel becomes 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)  # width/height unchanged, depth 32
# h_pool1 = max_pool_2x2(h_conv1)  # would halve the width and height

## conv2 layer: second convolutional layer
W_conv2 = weight_variable([2, 2, 32, 64])  # patch 2x2, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)  # takes the first layer's output

## fc1 layer: fully connected layer
W_fc1 = weight_variable([3 * 3 * 64, 512])  # flatten the depth-64 volume into a 512-long vector
b_fc1 = bias_variable([512])
h_pool2_flat = tf.reshape(h_conv2, [-1, 3 * 3 * 64])  # flatten the 3x3, depth-64 volume into 1-D
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)  # drop a (1 - keep_prob) fraction of the elements

## fc2 layer: fully connected layer
W_fc2 = weight_variable([512, 1])  # compress the 512-long vector to length 1
b_fc2 = bias_variable([1])         # bias
# final prediction
prediction = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# prediction = tf.nn.relu(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# Measure the gap between prediction and y: square the difference, sum, then average
cross_entropy = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
# learning rate 0.01; minimize(loss) shrinks the loss
train_step = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)

sess = tf.Session()
# important step
# tf.initialize_all_variables() is no longer valid for tensorflow >= 0.12 (2017-03-02)
sess.run(tf.global_variables_initializer())

# training loop (100 iterations)
for i in range(100):
    sess.run(train_step, feed_dict={xs: train_x_disorder, ys: train_y_disorder, keep_prob: 0.7})
    print(i, 'loss =', sess.run(cross_entropy, feed_dict={xs: train_x_disorder, ys: train_y_disorder, keep_prob: 1.0}))

# visualization
prediction_value = sess.run(prediction, feed_dict={xs: test_x_disorder, ys: test_y_disorder, keep_prob: 1.0})

### plotting ###
fig = plt.figure(figsize=(20, 3))  # dpi would set the resolution in pixels per inch (default 80)
axes = fig.add_subplot(1, 1, 1)
line1, = axes.plot(range(len(prediction_value)), prediction_value, 'b--', label='cnn', linewidth=2)
# line2, = axes.plot(range(len(gbr_pridict)), gbr_pridict, 'r--', label='tuned parameters')
line3, = axes.plot(range(len(test_y_disorder)), test_y_disorder, 'g', label='actual')
axes.grid()
fig.tight_layout()
# plt.legend(handles=[line1, line2, line3])
plt.legend(handles=[line1, line3])
plt.title('convolutional neural network')
plt.show()
```
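
On the error in the title (my reading, not confirmed in the thread): with padding='SAME' and no pooling, both conv layers keep the 5x5 spatial size, so h_conv2 holds 5 * 5 * 64 = 1600 values per sample; the code instead reshapes to 3 * 3 * 64 = 576, and a batch of 95 samples (95 x 1600 = 152000 values) is not a multiple of 576. Sizing the flatten and W_fc1 to the real shape fixes it:

```
# h_conv2 is [batch, 5, 5, 64] because padding='SAME' keeps 5x5 and nothing pools.
flat_size = 5 * 5 * 64                        # 1600 values per sample, not 3*3*64 = 576
W_fc1 = weight_variable([flat_size, 512])
b_fc1 = bias_variable([512])
h_pool2_flat = tf.reshape(h_conv2, [-1, flat_size])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
```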

How do I extract h_fc1 as a matrix and save it locally?

```
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import scipy.io as sio
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import xlrd
from openpyxl import Workbook

# Start the session interactively; without an interactive session the whole
# computation graph must be built before the session can be launched.
# sess = tf.InteractiveSession()
data = sio.loadmat('ballfault_DE.mat')
sensorlenth = 2048 * 36
condition = 4        # number of operating conditions
classification = 10  # number of classes
L = 2048             # network input length
evfisam_num = int(sensorlenth / L)
evfitrain_num = int(evfisam_num * 3 / 4)    # training samples per condition
evfitest_num = evfisam_num - evfitrain_num  # test samples per condition
div = 1
C = 4
al = 512
evdoctrain_num = condition * (evfitrain_num - 1) * C
evdoctest_num = condition * evfitest_num    # classes x conditions x samples per file
batch_num = int(evdoctrain_num / div)
train_num = evdoctrain_num * classification
test_num = evfitest_num * condition * classification

cnn_train = np.zeros((train_num, L))
cnn_test = np.zeros((test_num, L))
sensor_1 = data['ballfault']
for i in range(classification * condition):
    sensor = sensor_1[0:sensorlenth, i]
    cnn_train_1 = sensor[0:L * evfitrain_num]
    for j in range(C):  # data augmentation, C shifted copies
        cnn_train[(i * C + j) * (evfitrain_num - 1):(i * C + j + 1) * (evfitrain_num - 1), :] = \
            cnn_train_1[j * al:(evfitrain_num - 1) * L + j * al].reshape((evfitrain_num - 1), L)
    cnn_test_1 = sensor[L * evfitrain_num:evfisam_num * L]
    cnn_test[i * evfitest_num:(i + 1) * evfitest_num, :] = \
        cnn_test_1[0:evfitest_num * L].reshape(evfitest_num, L)

lable_train = np.zeros(train_num)
lable_test = np.zeros(test_num)
for num_dir in range(0, classification):
    lable_train[num_dir * evdoctrain_num:(num_dir + 1) * evdoctrain_num] = (num_dir + 1) * np.ones(evdoctrain_num)
    lable_test[num_dir * evdoctest_num:(num_dir + 1) * evdoctest_num] = (num_dir + 1) * np.ones(evdoctest_num)

expect_y = np.zeros((train_num, classification))
m = 0
for l in lable_train:
    expect_y[m, int(l - 1)] = 1
    m += 1
test_expect_y = np.zeros((test_num, classification))
m = 0
for l in lable_test:
    test_expect_y[m, int(l - 1)] = 1
    m += 1

merge = np.append(cnn_train, expect_y, axis=1)
np.random.shuffle(merge)  # tf.random_shuffle(a)
cnn_train = merge[:, 0:L]
expect_y = merge[:, L:L + classification]

kernel_length1 = 16
kernel_length2 = 10
kernel_length3 = 8
kernel_length4 = 6
kernel_length5 = 16
kernel_length6 = 10
kernel_length7 = 8
kernel_length8 = 6
# L_1 = int((L - kernel_length1 + 1) / 4)
# L_2 = int((L_1 - kernel_length2 + 1) / 4)
# L_3 = int((L_2 - kernel_length3 + 1) / 4)
B = np.power(2, 8)
L_end = int(L / B)
kernel_num_1 = 8
kernel_num_2 = 16
kernel_num_3 = 9
kernel_num_4 = 12
kernel_num_5 = 8
kernel_num_6 = 16
kernel_num_7 = 9
kernel_num_8 = 12
out_num = 100

"""Build the computation graph"""
# Placeholders create nodes for the input signals and the target classes.
# The shape argument is optional, but with it TensorFlow can automatically
# catch errors caused by inconsistent dimensions.
initial_input = tf.placeholder("float", shape=[None, L])           # raw input
initial_y = tf.placeholder("float", shape=[None, classification])  # target

# Two helper functions so initialization isn't repeated while building the model
def weight_variable(shape):
    # truncated normal distribution; stddev is its standard deviation
    initial = tf.truncated_normal(shape=shape, stddev=0.05)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# convolution and pooling, stride 1, zero padding
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 1, 2, 1], strides=[1, 1, 2, 1], padding='SAME')

"""Convolution layer 1"""
# One convolution and one max pool. The 1x16 filters produce kernel_num_1 features
# because kernel_num_1 filters are convolved with the input.
# The kernel tensor shape is [1, kernel_length1, 1, kernel_num_1]:
# 1 input channel, kernel_num_1 output channels.
W_conv1 = weight_variable([1, kernel_length1, 1, kernel_num_1])
# one bias per output channel
b_conv1 = bias_variable([kernel_num_1])
# To apply the convolution the input must be reshaped into a 4-D tensor;
# dims 2 and 3 are the image width and height, the last dim is the number of
# color channels (1 for grayscale images, 3 for RGB).
process_image = tf.reshape(initial_input, [-1, 1, L, 1])
# first conv layer output, with ReLU as the activation function
h_conv1 = tf.nn.relu(conv2d(process_image, W_conv1) + b_conv1)
# pooled output of the first conv layer
h_pool1 = max_pool_2x2(h_conv1)

"""Convolution layer 2"""
W_conv2 = weight_variable([1, kernel_length2, kernel_num_1, kernel_num_2])
b_conv2 = bias_variable([kernel_num_2])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

"""Convolution layer 3"""
W_conv3 = weight_variable([1, kernel_length3, kernel_num_2, kernel_num_3])
b_conv3 = bias_variable([kernel_num_3])
h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_2x2(h_conv3)

"""Convolution layer 4"""
W_conv4 = weight_variable([1, kernel_length4, kernel_num_3, kernel_num_4])
b_conv4 = bias_variable([kernel_num_4])
h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
h_pool4 = max_pool_2x2(h_conv4)

"""Convolution layer 5"""
W_conv5 = weight_variable([1, kernel_length5, kernel_num_4, kernel_num_5])
b_conv5 = bias_variable([kernel_num_5])
h_conv5 = tf.nn.relu(conv2d(h_pool4, W_conv5) + b_conv5)
h_pool5 = max_pool_2x2(h_conv5)

"""Convolution layer 6"""
W_conv6 = weight_variable([1, kernel_length6, kernel_num_5, kernel_num_6])
b_conv6 = bias_variable([kernel_num_6])
h_conv6 = tf.nn.relu(conv2d(h_pool5, W_conv6) + b_conv6)
h_pool6 = max_pool_2x2(h_conv6)

"""Convolution layer 7"""
W_conv7 = weight_variable([1, kernel_length7, kernel_num_6, kernel_num_7])
b_conv7 = bias_variable([kernel_num_7])
h_conv7 = tf.nn.relu(conv2d(h_pool6, W_conv7) + b_conv7)
h_pool7 = max_pool_2x2(h_conv7)

"""Convolution layer 8"""
W_conv8 = weight_variable([1, kernel_length8, kernel_num_7, kernel_num_8])
b_conv8 = bias_variable([kernel_num_8])
h_conv8 = tf.nn.relu(conv2d(h_pool7, W_conv8) + b_conv8)
h_pool8 = max_pool_2x2(h_conv8)

"""Fully connected layer"""
W_fc1 = weight_variable([int(L_end * kernel_num_8), out_num])
b_fc1 = bias_variable([out_num])
# reshape the last pooled tensor into a one-dimensional vector
h_pool8_flat = tf.reshape(h_pool8, [-1, int(L_end * kernel_num_8)])
# output of the fully connected layer
h_fc1 = tf.nn.relu(tf.matmul(h_pool8_flat, W_fc1) + b_fc1)

"""Dropout to reduce overfitting"""
# A placeholder for the probability that a neuron's output is kept during dropout:
# enable dropout during training, disable it during testing.
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

"""Output layer"""
W_fc2 = weight_variable([out_num, classification])
b_fc2 = bias_variable([classification])
# model prediction
yconv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# cross-entropy loss
cross_entropy_1 = tf.reduce_sum(initial_y * yconv, 1)
cross_entropy = -tf.reduce_sum(tf.log(cross_entropy_1)) / train_num
# train the model with AdamOptimizer for steepest gradient descent
train_step = tf.train.AdamOptimizer(0.00015).minimize(cross_entropy)
# correct predictions as a list of True/False values
correct_prediction = tf.equal(tf.argmax(yconv, 1), tf.argmax(initial_y, 1))
# cast the booleans to floats and take the mean as the accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
init = tf.global_variables_initializer()

# iteratively optimize the model
with tf.Session() as sess:
    sess.run(init)
    for i in range(300):
        k = 0
        while (k < classification * div):
            a = cnn_train[k * batch_num:(k + 1) * batch_num]
            b = expect_y[k * batch_num:(k + 1) * batch_num]
            # if (i + 1) % 10 == 0:
            #     print("test accuracy: %g" % accuracy.eval(feed_dict={initial_input: cnn_test, initial_y: test_expect_y, keep_prob: 1.0}))
            train_step.run(feed_dict={initial_input: a, initial_y: b, keep_prob: 0.5})
            # print(sess.run(cross_entropy, feed_dict={initial_input: a, initial_y: b, keep_prob: 1.0}))
            k += 1
        print("accurate: %g" % sess.run(accuracy, feed_dict={initial_input: a, initial_y: b, keep_prob: 1.0}))
```
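
To answer the title question, a sketch (names taken from the question's code; scipy.io is already imported there as sio, and the savemat/np.save calls are my suggestion): h_fc1 is just a graph node, so evaluating it in the session on the samples of interest yields an ordinary numpy matrix that can be written to disk:

```
# Inside the `with tf.Session() as sess:` block, after training:
features = sess.run(h_fc1, feed_dict={initial_input: cnn_test,
                                      initial_y: test_expect_y,
                                      keep_prob: 1.0})    # numpy array of shape [test_num, out_num]
sio.savemat('h_fc1_features.mat', {'h_fc1': features})    # MATLAB .mat file
np.save('h_fc1_features.npy', features)                   # or numpy's own format
```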
