How to fix: cannot import name 'dense_features' from 'tensorflow.python.feature_column'

I am getting the error `cannot import name 'dense_features' from 'tensorflow.python.feature_column'`. My tensorflow version is 1.14.0. I have tried reinstalling it, but that did not help. The other installed packages are shown below.
[screenshot of installed packages]
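A hedged diagnostic sketch, not a definitive fix: `dense_features` only exists in `tensorflow.python.feature_column` from TF 1.15 / 2.x onwards, so this import error usually means that some other installed package (often a newer standalone `keras` or a mismatched `tensorflow-estimator`) expects a newer TensorFlow than the installed 1.14.0. The package names checked below are assumptions; adapt them to what is actually installed.
```
import tensorflow as tf
import pkg_resources

print('tensorflow:', tf.__version__)   # 1.14.0 here

# Compare the versions of the packages that most often trigger this mismatch
for pkg in ('keras', 'tensorflow-estimator'):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, 'not installed')

# Typical remedies (pick one, run from the command line):
#   pip install "keras==2.2.4" "tensorflow-estimator==1.14.0"   # stay on TF 1.14
#   pip install --upgrade tensorflow                             # or move to TF >= 1.15
```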

Other related questions
VS2017 and OpenCV 2.4 debugging problem: cannot find or open the PDB file
"dense_inference.exe" (Win32): Loaded "E:\dense_inference\dense_inference\opencv_core249.dll". Cannot find or open the PDB file. "dense_inference.exe" (Win32): Unloaded "E:\dense_inference\dense_inference\opencv_core249.dll" ![screenshot](https://img-ask.csdn.net/upload/201903/07/1551962949_584023.png)
After increasing the number of input samples for an LSTM model, the loss changes as shown in the two figures below
![loss on 1000 samples](https://img-ask.csdn.net/upload/201903/05/1551784615_673147.png) The figure above shows the loss (MAPE, mean absolute percentage error) when the model is trained on 1000 samples. The figure below shows what happens after the sample count is increased to 2000: the loss curve becomes hard to read. Within each epoch the loss drops to around 80 and then jumps back up to a much larger value, and whenever MAPE is around 80 the accuracy is 0, before the loss starts falling again. ![loss on 2000 samples](https://img-ask.csdn.net/upload/201903/05/1551784714_123808.png) The model code is attached:
```
### model
taxi_id = Input(shape=(50, 1))
mask_1 = Masking(mask_value=0)(taxi_id)
embedding_1 = Embedding(15000, 14, mask_zero=True)(mask_1)
time_id = Input(shape=(50, 1))
mask_2 = Masking(mask_value=0)(time_id)
embedding_2 = Embedding(1440, 6, mask_zero=True)(mask_2)
busy = Input(shape=(50, 1))
mask_3 = Masking(mask_value=0)(busy)
embedding_3 = Embedding(2, 2, mask_zero=True)(mask_3)
concatenate_1 = Concatenate(axis=3)([embedding_1, embedding_2, embedding_3])
concatenate_1 = Lambda(dim_squeeze)(concatenate_1)
num_input = Input(shape=(50, 3))
mask_4 = Masking(mask_value=0, input_shape=())(num_input)
concatenate_2 = Concatenate(axis=2)([concatenate_1, mask_4])
blstm_1 = Bidirectional(LSTM(128, activation='tanh', return_sequences=True, dropout=0.2))(concatenate_2)
blstm_2 = Bidirectional(LSTM(256, activation='tanh', return_sequences=True, dropout=0.2))(blstm_1)
blstm_3 = Bidirectional(LSTM(128, activation='tanh', return_sequences=True, dropout=0.2))(blstm_2)
dense_1 = Dense(128)(blstm_3)
leaky_relu_1 = advanced_activations.LeakyReLU(alpha=0.3)(dense_1)
dense_2 = Dense(128)(leaky_relu_1)
leaky_relu_2 = advanced_activations.LeakyReLU(alpha=0.3)(dense_2)
dense_3 = Dense(128)(leaky_relu_2)
leaky_relu_3 = advanced_activations.LeakyReLU(alpha=0.3)(dense_3)
dense_4 = Dense(128)(leaky_relu_3)
leaky_relu_4 = advanced_activations.LeakyReLU(alpha=0.3)(dense_4)
add_1 = add([leaky_relu_1, leaky_relu_2, leaky_relu_3, leaky_relu_4])
dense_5 = Dense(1, activation='linear')(add_1)
dense_5 = Lambda(dim_squeeze)(dense_5)
dense_5 = Dense(units=1, activation='linear')(dense_5)
model = Model([taxi_id, time_id, busy, num_input], dense_5)
```
Could someone take a look and point out what a loss curve like this says about where the problem is? Thanks in advance.
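A hedged remedy, not a diagnosis of this specific model: loss spikes of this kind are often tamed by capping the gradient norm and lowering the learning rate, since a larger dataset tends to contain more outlier batches. `model` is the Model built above; the optimizer, loss name and metric are assumptions, adapt them to the real compile call.
```
from keras.optimizers import Adam

opt = Adam(lr=1e-4, clipnorm=1.0)   # clipnorm limits the gradient norm per update
model.compile(optimizer=opt, loss='mape', metrics=['accuracy'])
```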
Running it gives: train(generator,discriminator,gan_model,latent_dim) NameError: name 'train' is not defined. How do I fix this?
import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from numpy import hstack from numpy import zeros from numpy import ones from numpy.random import rand from numpy.random import randn from keras.models import Sequential from keras.layers import Dense def define_discriminator(n_inputs=2): model=Sequential() model.add(Dense(25, activation='relu',kernel_initializer='he_uniform',input_dim=n_inputs)) model.add(Dense(1,activation='sigmoid')) model.compile(loss='binary_crossentropy',optimizer='adam', metrics=['accuracy']) return model def define_generator(latent_dim,n_outputs=2): model=Sequential() model.add(Dense(15, activation='relu',kernel_initializer='he_uniform', input_dim=latent_dim)) model.add(Dense(n_outputs,activation='linear')) return model def define_gan(generator,discriminator): discriminator.trainable=False model=Sequential() model.add(generator) model.add(discriminator) model.compile(loss='binary_crossentropy',optimizer='adam') return model def generate_real_samples(n): x1=rand(n)-0.5 x2=x1*x1 x1=x1.reshape(n,1) x2=x2.reshape(n,1) x=hstack((x1,x2)) y=ones((n,1)) return x,y def generate_latent_points(latent_dim,n): x_input=randn(latent_dim*n) x_input=x_input.reshape(n,latent_dim) return x_input def generate_fake_samples(generator,latent_dim,n): x_input=generate_latent_points(latent_dim,n) x=generator.predict(x_input) y=zeros((n,1)) return x,y def summarize_performance(epoch,generator,discriminator,latent_dim,n=100): x_real,y_real=generate_real_samples(n) _,acc_real=discriminator.evaluate(x_real,y_real,verbose=0) x_fake, y_fake = generate_fake_samples(generator,latent_dim,n) _, acc_fake = discriminator.evaluate(x_fake, y_fake, verbose=0) print(epoch,acc_real,acc_fake) plt.scatter(x_real[:,0],x_real[:,1],color='red') plt.scatter(x_fake[:, 0], x_fake[:, 1], color='blue') plt.show() def train(g_model,d_model,gan_model,latent_dim,n_epochs=10000,n_batch=128,n_eval=2000): half_batch=int(n_batch/2) for i in range(n_epochs): x_real,y_real=generate_real_samples(half_batch) x_fake,y_fake=generate_fake_samples(g_model,latent_dim,half_batch) d_model.train_on_batch(x_real,y_real) d_model.train_on_batch(x_fake, y_fake) x_gan=generate_latent_points(latent_dim,n_batch) y_gan=ones((n_batch,1)) gan_model.train_on_batch(x_gan,y_gan) if(i+1)%n_epochs==0: summarize_performance(i,g_model,d_model,latent_dim) latent_dim=5 discriminator=define_discriminator() generator=define_generator(latent_dim) gan_model=define_gan(generator,discriminator) train(generator,discriminator,gan_model,latent_dim) 问题
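A hedged reading of the NameError: the code itself does define `train`, so the usual cause is that the definition never executed before the call, for example because it was pasted with the wrong indentation inside another function, or the notebook cell defining it was not run. A minimal sketch of the required layout, reusing the functions defined in the question:
```
# define_discriminator / define_generator / define_gan / train are the functions
# from the question; the point is only that `def train(...)` must sit at module
# level (column 0) and be executed before the call below.
def train(g_model, d_model, gan_model, latent_dim,
          n_epochs=10000, n_batch=128, n_eval=2000):
    ...  # body exactly as in the question

latent_dim = 5
discriminator = define_discriminator()
generator = define_generator(latent_dim)
gan_model = define_gan(generator, discriminator)
train(generator, discriminator, gan_model, latent_dim)   # the name now resolves
```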
The Keras output layer should produce shape (None, 1), but my model outputs a 3-D shape (None, 10, 1)
![图片说明](https://img-ask.csdn.net/upload/201902/15/1550245551_523600.jpg) ![图片说明](https://img-ask.csdn.net/upload/201902/15/1550245621_599620.png) ``` ### model taxi_id = Input(shape=(10, 1)) mask_1 = Masking(mask_value=0)(taxi_id) embedding_1 = Embedding(15000, 14, mask_zero=True)(mask_1) time_id = Input(shape=(10, 1)) mask_2 = Masking(mask_value=0)(time_id) embedding_2 = Embedding(7, 4, mask_zero=True)(mask_2) busy = Input(shape=(10, 1)) mask_3 = Masking(mask_value=0)(busy) embedding_3 = Embedding(2, 2, mask_zero=True)(mask_3) concatenate_1 = Concatenate(axis=3)([embedding_1,embedding_2,embedding_3]) concatenate_1 = Lambda(dim_squeeze)(concatenate_1) num_input = Input(shape=(10, 3)) mask_4 = Masking(mask_value=0, input_shape=())(num_input) concatenate_2 = Concatenate(axis=2)([concatenate_1, mask_4]) blstm_1 = Bidirectional(LSTM(64, activation='tanh', return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(concatenate_2) blstm_2 = Bidirectional(LSTM(128, activation='tanh', return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(blstm_1) blstm_3 = Bidirectional(LSTM(64, activation='tanh', return_sequences=True, dropout=0.2, recurrent_dropout=0.2))(blstm_2) dense_1 = Dense(128)(blstm_3) leaky_relu_1 = advanced_activations.LeakyReLU(alpha=0.3)(dense_1) dense_2 = Dense(128)(leaky_relu_1) leaky_relu_2 = advanced_activations.LeakyReLU(alpha=0.3)(dense_2) dense_3 = Dense(128)(leaky_relu_2) leaky_relu_3 = advanced_activations.LeakyReLU(alpha=0.3)(dense_3) dense_4 = Dense(128)(leaky_relu_3) leaky_relu_4 = advanced_activations.LeakyReLU(alpha=0.3)(dense_4) add_1 = add([leaky_relu_1, leaky_relu_2, leaky_relu_3, leaky_relu_4]) dense_5 = Dense(1)(add_1) model = Model([taxi_id, time_id, busy, num_input], dense_5) ``` 求教大佬该怎么写能把输出维度降下来
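A hedged sketch for the model above, showing only the changed line: if one prediction per sequence is wanted, the last Bidirectional LSTM can return just its final state instead of the full sequence, so the following Dense layers produce (None, 1) rather than (None, 10, 1). (An alternative is to keep return_sequences=True and pool or flatten over the time axis before the output layer.)
```
# Changed line only: drop return_sequences on the last LSTM so the time axis
# is collapsed before the Dense stack.
blstm_3 = Bidirectional(LSTM(64, activation='tanh', dropout=0.2,
                             recurrent_dropout=0.2,
                             return_sequences=False))(blstm_2)
```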
Module missing from tensorflow 2.0 under Python 3.7
I am a beginner working on handwritten digit recognition and ran into an import problem I cannot solve: ModuleNotFoundError: No module named 'tensorflow.examples.tutorials'. import keras # import Keras import numpy as np from keras.datasets import mnist # the MNIST dataset bundled with keras from keras.models import Sequential # sequential model from keras.layers import Dense # fully connected layer from keras.optimizers import SGD # optimizer from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data", one_hot = True) ![error screenshot](https://img-ask.csdn.net/upload/201911/17/1573957701_315782.png) I searched online for a long time but could not quite follow the answers; a detailed solution would be much appreciated.
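A hedged alternative: in TF 2.x the tensorflow.examples.tutorials package is gone, but the same data is available through Keras. A minimal replacement for input_data.read_data_sets under that assumption, with the one_hot=True behaviour reproduced:
```
import numpy as np
from keras.datasets import mnist
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0   # flatten and scale
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
y_train = to_categorical(y_train, 10)   # equivalent of one_hot=True
y_test = to_categorical(y_test, 10)
```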
fr_utils.py from week 4 of course 4 of Andrew Ng's deep learning specialization throws an error; has anyone run into this?
Face Recognition/fr_utils.py, Line21中_get_session()和Line140中model无法找到引用,请问这是什么原因 加载模型时候会报如下错误: Using TensorFlow backend. 2018-08-26 21:30:53.046324: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 Total Params: 3743280 Traceback (most recent call last): File "C:/Users/51530/PycharmProjects/DL/wuenda/Face/faceV3.py", line 60, in <module> load_weights_from_FaceNet(FRmodel) File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 133, in load_weights_from_FaceNet weights_dict = load_weights() File "C:\Users\51530\PycharmProjects\DL\wuenda\Face\fr_utils.py", line 154, in load_weights conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None) File "E:\anaconda\lib\site-packages\numpy\lib\npyio.py", line 1867, in genfromtxt raise ValueError(errmsg) ValueError: Some errors were detected ! Line #7 (got 2 columns instead of 1) Line #12 (got 3 columns instead of 1) Line #15 (got 2 columns instead of 1) 具体此文件: ``` #### PART OF THIS CODE IS USING CODE FROM VICTOR SY WANG: https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/utils.py #### import tensorflow as tf import numpy as np import os import cv2 from numpy import genfromtxt from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate from keras.models import Model from keras.layers.normalization import BatchNormalization from keras.layers.pooling import MaxPooling2D, AveragePooling2D import h5py import matplotlib.pyplot as plt _FLOATX = 'float32' def variable(value, dtype=_FLOATX, name=None): v = tf.Variable(np.asarray(value, dtype=dtype), name=name) _get_session().run(v.initializer) return v def shape(x): return x.get_shape() def square(x): return tf.square(x) def zeros(shape, dtype=_FLOATX, name=None): return variable(np.zeros(shape), dtype, name) def concatenate(tensors, axis=-1): if axis < 0: axis = axis % len(tensors[0].get_shape()) return tf.concat(axis, tensors) def LRN2D(x): return tf.nn.lrn(x, alpha=1e-4, beta=0.75) def conv2d_bn(x, layer=None, cv1_out=None, cv1_filter=(1, 1), cv1_strides=(1, 1), cv2_out=None, cv2_filter=(3, 3), cv2_strides=(1, 1), padding=None): num = '' if cv2_out == None else '1' tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides, data_format='channels_first', name=layer+'_conv'+num)(x) tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+num)(tensor) tensor = Activation('relu')(tensor) if padding == None: return tensor tensor = ZeroPadding2D(padding=padding, data_format='channels_first')(tensor) if cv2_out == None: return tensor tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides, data_format='channels_first', name=layer+'_conv'+'2')(tensor) tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer+'_bn'+'2')(tensor) tensor = Activation('relu')(tensor) return tensor WEIGHTS = [ 'conv1', 'bn1', 'conv2', 'bn2', 'conv3', 'bn3', 'inception_3a_1x1_conv', 'inception_3a_1x1_bn', 'inception_3a_pool_conv', 'inception_3a_pool_bn', 'inception_3a_5x5_conv1', 'inception_3a_5x5_conv2', 'inception_3a_5x5_bn1', 'inception_3a_5x5_bn2', 'inception_3a_3x3_conv1', 'inception_3a_3x3_conv2', 'inception_3a_3x3_bn1', 'inception_3a_3x3_bn2', 'inception_3b_3x3_conv1', 'inception_3b_3x3_conv2', 'inception_3b_3x3_bn1', 'inception_3b_3x3_bn2', 'inception_3b_5x5_conv1', 'inception_3b_5x5_conv2', 'inception_3b_5x5_bn1', 'inception_3b_5x5_bn2', 'inception_3b_pool_conv', 'inception_3b_pool_bn', 'inception_3b_1x1_conv', 'inception_3b_1x1_bn', 
'inception_3c_3x3_conv1', 'inception_3c_3x3_conv2', 'inception_3c_3x3_bn1', 'inception_3c_3x3_bn2', 'inception_3c_5x5_conv1', 'inception_3c_5x5_conv2', 'inception_3c_5x5_bn1', 'inception_3c_5x5_bn2', 'inception_4a_3x3_conv1', 'inception_4a_3x3_conv2', 'inception_4a_3x3_bn1', 'inception_4a_3x3_bn2', 'inception_4a_5x5_conv1', 'inception_4a_5x5_conv2', 'inception_4a_5x5_bn1', 'inception_4a_5x5_bn2', 'inception_4a_pool_conv', 'inception_4a_pool_bn', 'inception_4a_1x1_conv', 'inception_4a_1x1_bn', 'inception_4e_3x3_conv1', 'inception_4e_3x3_conv2', 'inception_4e_3x3_bn1', 'inception_4e_3x3_bn2', 'inception_4e_5x5_conv1', 'inception_4e_5x5_conv2', 'inception_4e_5x5_bn1', 'inception_4e_5x5_bn2', 'inception_5a_3x3_conv1', 'inception_5a_3x3_conv2', 'inception_5a_3x3_bn1', 'inception_5a_3x3_bn2', 'inception_5a_pool_conv', 'inception_5a_pool_bn', 'inception_5a_1x1_conv', 'inception_5a_1x1_bn', 'inception_5b_3x3_conv1', 'inception_5b_3x3_conv2', 'inception_5b_3x3_bn1', 'inception_5b_3x3_bn2', 'inception_5b_pool_conv', 'inception_5b_pool_bn', 'inception_5b_1x1_conv', 'inception_5b_1x1_bn', 'dense_layer' ] conv_shape = { 'conv1': [64, 3, 7, 7], 'conv2': [64, 64, 1, 1], 'conv3': [192, 64, 3, 3], 'inception_3a_1x1_conv': [64, 192, 1, 1], 'inception_3a_pool_conv': [32, 192, 1, 1], 'inception_3a_5x5_conv1': [16, 192, 1, 1], 'inception_3a_5x5_conv2': [32, 16, 5, 5], 'inception_3a_3x3_conv1': [96, 192, 1, 1], 'inception_3a_3x3_conv2': [128, 96, 3, 3], 'inception_3b_3x3_conv1': [96, 256, 1, 1], 'inception_3b_3x3_conv2': [128, 96, 3, 3], 'inception_3b_5x5_conv1': [32, 256, 1, 1], 'inception_3b_5x5_conv2': [64, 32, 5, 5], 'inception_3b_pool_conv': [64, 256, 1, 1], 'inception_3b_1x1_conv': [64, 256, 1, 1], 'inception_3c_3x3_conv1': [128, 320, 1, 1], 'inception_3c_3x3_conv2': [256, 128, 3, 3], 'inception_3c_5x5_conv1': [32, 320, 1, 1], 'inception_3c_5x5_conv2': [64, 32, 5, 5], 'inception_4a_3x3_conv1': [96, 640, 1, 1], 'inception_4a_3x3_conv2': [192, 96, 3, 3], 'inception_4a_5x5_conv1': [32, 640, 1, 1,], 'inception_4a_5x5_conv2': [64, 32, 5, 5], 'inception_4a_pool_conv': [128, 640, 1, 1], 'inception_4a_1x1_conv': [256, 640, 1, 1], 'inception_4e_3x3_conv1': [160, 640, 1, 1], 'inception_4e_3x3_conv2': [256, 160, 3, 3], 'inception_4e_5x5_conv1': [64, 640, 1, 1], 'inception_4e_5x5_conv2': [128, 64, 5, 5], 'inception_5a_3x3_conv1': [96, 1024, 1, 1], 'inception_5a_3x3_conv2': [384, 96, 3, 3], 'inception_5a_pool_conv': [96, 1024, 1, 1], 'inception_5a_1x1_conv': [256, 1024, 1, 1], 'inception_5b_3x3_conv1': [96, 736, 1, 1], 'inception_5b_3x3_conv2': [384, 96, 3, 3], 'inception_5b_pool_conv': [96, 736, 1, 1], 'inception_5b_1x1_conv': [256, 736, 1, 1], } def load_weights_from_FaceNet(FRmodel): # Load weights from csv files (which was exported from Openface torch model) weights = WEIGHTS weights_dict = load_weights() # Set layer weights of the model for name in weights: if FRmodel.get_layer(name) != None: FRmodel.get_layer(name).set_weights(weights_dict[name]) elif model.get_layer(name) != None: model.get_layer(name).set_weights(weights_dict[name]) def load_weights(): # Set weights path dirPath = './weights' fileNames = filter(lambda f: not f.startswith('.'), os.listdir(dirPath)) paths = {} weights_dict = {} for n in fileNames: paths[n.replace('.csv', '')] = dirPath + '/' + n for name in WEIGHTS: if 'conv' in name: conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None) conv_w = np.reshape(conv_w, conv_shape[name]) conv_w = np.transpose(conv_w, (2, 3, 1, 0)) conv_b = genfromtxt(paths[name + '_b'], delimiter=',', 
dtype=None) weights_dict[name] = [conv_w, conv_b] elif 'bn' in name: bn_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None) bn_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None) bn_m = genfromtxt(paths[name + '_m'], delimiter=',', dtype=None) bn_v = genfromtxt(paths[name + '_v'], delimiter=',', dtype=None) weights_dict[name] = [bn_w, bn_b, bn_m, bn_v] elif 'dense' in name: dense_w = genfromtxt(dirPath+'/dense_w.csv', delimiter=',', dtype=None) dense_w = np.reshape(dense_w, (128, 736)) dense_w = np.transpose(dense_w, (1, 0)) dense_b = genfromtxt(dirPath+'/dense_b.csv', delimiter=',', dtype=None) weights_dict[name] = [dense_w, dense_b] return weights_dict def load_dataset(): train_dataset = h5py.File('datasets/train_happy.h5', "r") train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels test_dataset = h5py.File('datasets/test_happy.h5', "r") test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels classes = np.array(test_dataset["list_classes"][:]) # the list of classes train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0])) test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0])) return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes def img_to_encoding(image_path, model): img1 = cv2.imread(image_path, 1) img = img1[...,::-1] img = np.around(np.transpose(img, (2,0,1))/255.0, decimals=12) x_train = np.array([img]) embedding = model.predict_on_batch(x_train) return embedding ```
Calling Python from C++ works from a console app but fails under MFC; the Python script depends on tensorflow
The x64 console project and the MFC project use the same configuration. The key C++ code is:
```
#define PY_modePath L"E:\\Anaconda\\envs\\asr\\"

Py_SetPythonHome(PY_modePath);
pModule = PyImport_ImportModule(aasr.c_str()); // null under MFC, OK from the console
```
The Python code is:
```
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
@author: sly
"""
import platform as plat
import os
import time

from general_function.file_wav import *
from general_function.file_dict import *
from general_function.gen_func import *

import numpy as np
import random

from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Input, Reshape, BatchNormalization  # , Flatten
from keras.layers import Lambda, TimeDistributed, Activation, Conv2D, MaxPooling2D  # , Merge
from keras import backend as K
from keras.optimizers import SGD, Adadelta, Adam
```
I have checked the paths several times and they look fine. When the script is loaded, the C++ side prints the following output: [not shown in the post]
tensorflow.python.framework.errors_impl.InvalidArgumentError
tensorflow.python.framework.errors_impl.InvalidArgumentError: Key: image/encoded. Can't parse serialized Example. [[Node: ParseSingleExample/ParseSingleExample = ParseSingleExample[Tdense=[DT_STRING, DT_INT64], dense_keys=["image/encoded", "image/label"], dense_shapes=[[8], []], num_sparse=0, sparse_keys=[], sparse_types=[]](arg0, ParseSingleExample/Const, ParseSingleExample/Const_1)]] [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,8,?,?,3], [?]], output_types=[DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator)]] What is causing this? Is something wrong with my dataset?
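A hedged reading of the node definition in the error: the parser declares image/encoded with dense shape [8], so each serialized Example must contain exactly 8 values under that key; "Can't parse serialized Example" usually means the records actually hold a different number. Inspecting one record shows what is really stored (the TFRecord path below is an assumption, point it at the real file):
```
import tensorflow as tf

# Look at the first record and count the values stored under each key.
for rec in tf.python_io.tf_record_iterator('train.record'):
    example = tf.train.Example()
    example.ParseFromString(rec)
    feats = example.features.feature
    print('image/encoded values:', len(feats['image/encoded'].bytes_list.value))
    print('image/label values:', len(feats['image/label'].int64_list.value))
    break
```
Either the writer should store exactly 8 encoded strings per Example, or the FixedLenFeature shape in the parsing code should match what the inspection prints.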
Tensorflow 2.0 : When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
下面代码每次执行到epochs 中的最后一个step 都会报错,请教大牛这是什么问题呢? ``` import tensorflow_datasets as tfds dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True) train_dataset,test_dataset = dataset['train'],dataset['test'] tokenizer = info.features['text'].encoder print('vocabulary size: ', tokenizer.vocab_size) sample_string = 'Hello world, tensorflow' tokenized_string = tokenizer.encode(sample_string) print('tokened id: ', tokenized_string) src_string= tokenizer.decode(tokenized_string) print(src_string) for t in tokenized_string: print(str(t) + ': '+ tokenizer.decode([t])) BUFFER_SIZE=6400 BATCH_SIZE=64 num_train_examples = info.splits['train'].num_examples num_test_examples=info.splits['test'].num_examples print("Number of training examples: {}".format(num_train_examples)) print("Number of test examples: {}".format(num_test_examples)) train_dataset=train_dataset.shuffle(BUFFER_SIZE) train_dataset=train_dataset.padded_batch(BATCH_SIZE,train_dataset.output_shapes) test_dataset=test_dataset.padded_batch(BATCH_SIZE,test_dataset.output_shapes) def get_model(): model=tf.keras.Sequential([ tf.keras.layers.Embedding(tokenizer.vocab_size,64), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), tf.keras.layers.Dense(64,activation='relu'), tf.keras.layers.Dense(1,activation='sigmoid') ]) return model model =get_model() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) import math #from tensorflow import keras #train_dataset= keras.preprocessing.sequence.pad_sequences(train_dataset, maxlen=BUFFER_SIZE) history =model.fit(train_dataset, epochs=2, steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ), validation_data= test_dataset) ``` Train on 10 steps Epoch 1/2 9/10 [==========================>...] - ETA: 3s - loss: 0.6955 - accuracy: 0.4479 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-111-8ddec076c096> in <module> 6 epochs=2, 7 steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ), ----> 8 validation_data= test_dataset) /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 726 max_queue_size=max_queue_size, 727 workers=workers, --> 728 use_multiprocessing=use_multiprocessing) 729 730 def evaluate(self, /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs) 672 validation_steps=validation_steps, 673 validation_freq=validation_freq, --> 674 steps_name='steps_per_epoch') 675 676 def evaluate(self, /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs) 437 
validation_in_fit=True, 438 prepared_feed_values_from_dataset=(val_iterator is not None), --> 439 steps_name='validation_steps') 440 if not isinstance(val_results, list): 441 val_results = [val_results] /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs) 174 if not is_dataset: 175 num_samples_or_steps = _get_num_samples_or_steps(ins, batch_size, --> 176 steps_per_epoch) 177 else: 178 num_samples_or_steps = steps_per_epoch /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in _get_num_samples_or_steps(ins, batch_size, steps_per_epoch) 491 return steps_per_epoch 492 return training_utils.check_num_samples(ins, batch_size, steps_per_epoch, --> 493 'steps_per_epoch') 494 495 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name) 422 raise ValueError('If ' + steps_name + 423 ' is set, the `batch_size` must be None.') --> 424 if check_steps_argument(ins, steps, steps_name): 425 return None 426 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_steps_argument(input_data, steps, steps_name) 1199 raise ValueError('When using {input_type} as input to a model, you should' 1200 ' specify the `{steps_name}` argument.'.format( -> 1201 input_type=input_type_str, steps_name=steps_name)) 1202 return True 1203 ValueError: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
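A hedged reading of the traceback above: the failure happens when validation starts, because test_dataset is a tf.data Dataset and this code path in TF 2.0 also needs `validation_steps` (and repeated datasets so the steps can be drawn every epoch). A sketch of the changed fit call, assuming the variables defined in the question:
```
history = model.fit(train_dataset.repeat(),
                    epochs=2,
                    steps_per_epoch=math.ceil(BUFFER_SIZE / BATCH_SIZE) - 90,
                    validation_data=test_dataset.repeat(),
                    validation_steps=math.ceil(num_test_examples / BATCH_SIZE))
```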
Deep reinforcement learning: an Actor-Critic agent playing CartPole from the gym library does not converge (high bounty)
小弟最近在自学深度强化学习,看的莫烦大佬的视频。其中有一个用AC算法玩gym库中CartPole的游戏实例,自己写的代码不知为何不能够收敛。考虑到自己自己写的程序中将AC网络写到一个类里去了,尝试过在A网络训练时截断C网络的梯度反向传播防止干扰,但还是不收敛。 小弟小白初学者自己瞎琢磨的,实在找不出原因,高分悬赏,希望大佬们能解惑。代码如下,其中有两个文件,一个是用以运行的主程序,另一个是主程序要调用的类,大佬们跑一下试试。 另外,真心诚意提问,请勿复制粘贴答非所问。 ``` ########主程序:AC_RL_run_this########## import gym from AC_RL_brain import ACNetwork def run_game(): step = 0 for episode in range(100000): episode_reward = 0 observation = env.reset() while True: if episode_reward > 20: env.render() action = RL.choose_action(observation) observation_, reward, done, _ = env.step(action) if done: reward = -20 RL.C_learn(observation, reward, observation_) RL.A_learn(observation, action) episode_reward += reward if done: break observation = observation_ step += 1 print('%d回合总回报:%f' % (episode, episode_reward)) print('game over') env.close() if __name__ == '__main__': env = gym.make('CartPole-v0') env.seed(1) RL = ACNetwork( n_actions=env.action_space.n, n_features=env.observation_space.shape[0], gamma=0.95, A_lr=0.001, C_lr=0.01, ) run_game() ########需要调用的类:AC_RL_brain########## import tensorflow as tf import numpy as np np.random.seed(2) tf.set_random_seed(2) # reproducible class ACNetwork: def __init__( self, n_actions, n_features, gamma, A_lr, C_lr, ): self.n_actions = n_actions self.n_features = n_features self.gamma = gamma self.A_lr = A_lr self.C_lr = C_lr self.td_error_real = 0 self._build_net() self.sess = tf.Session() self.sess.run(tf.global_variables_initializer()) def _build_net(self): # placeholder self.s = tf.placeholder(tf.float32, [1, self.n_features], "state") self.v_ = tf.placeholder(tf.float32, [1, 1], "v_next") self.r = tf.placeholder(tf.float32, None, 'r') self.a = tf.placeholder(tf.int32, None, "act") # A_net l1_A = tf.layers.dense( inputs=self.s, units=20, # number of hidden units activation=tf.nn.relu, kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.acts_prob = tf.layers.dense( inputs=l1_A, units=self.n_actions, # output units activation=tf.nn.softmax, # get action probabilities kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.log_prob = tf.log(self.acts_prob[0, self.a]) self.exp_v = tf.reduce_mean(self.log_prob * self.td_error_real) # advantage (TD_error) guided loss self.train_op_A = tf.train.AdamOptimizer(self.A_lr).minimize(-self.exp_v) # minimize(-exp_v) = maximize(exp_v) # C_net l1_C = tf.layers.dense( inputs=self.s, units=20, # number of hidden units activation=tf.nn.relu, # None # have to be linear to make sure the convergence of actor. # But linear approximator seems hardly learns the correct Q. 
kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.v = tf.layers.dense( inputs=l1_C, units=1, # output units activation=None, kernel_initializer=tf.random_normal_initializer(0., .1), # weights bias_initializer=tf.constant_initializer(0.1), # biases ) self.td_error = self.r + self.gamma * self.v_ - self.v self.loss = tf.square(self.td_error) # TD_error = (r+gamma*V_next) - V_eval self.train_op_C = tf.train.AdamOptimizer(self.C_lr).minimize(self.loss) def choose_action(self, s): s = s[np.newaxis, :] probs = self.sess.run(self.acts_prob, {self.s: s}) # get probabilities for all actions return np.random.choice(np.arange(probs.shape[1]), p=probs.ravel()) # return a int def A_learn(self, s, a): s = s[np.newaxis, :] feed_dict = {self.s: s, self.a: a} _, exp_v = self.sess.run([self.train_op_A, self.exp_v], feed_dict) def C_learn(self, s, r, s_): s, s_ = s[np.newaxis, :], s_[np.newaxis, :] v_ = self.sess.run(self.v, {self.s: s_}) self.td_error_real, _ = self.sess.run([self.td_error, self.train_op_C], {self.s: s, self.v_: v_, self.r: r}) ```
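A hedged reading of why the actor never improves in the code above: `self.td_error_real` is a plain Python number (0) at graph-construction time, so `exp_v = log_prob * 0` is baked into the graph and the actor receives zero gradient forever; updating the attribute later does not change the graph. One common fix is to feed the TD error through a placeholder. A sketch of the changed pieces only (placeholder name is an assumption):
```
# Inside _build_net: multiply by a placeholder instead of a Python constant.
self.td_error_ph = tf.placeholder(tf.float32, None, "td_error")
self.exp_v = tf.reduce_mean(self.log_prob * self.td_error_ph)
self.train_op_A = tf.train.AdamOptimizer(self.A_lr).minimize(-self.exp_v)

# A_learn then feeds the TD error computed by C_learn (stored in td_error_real).
def A_learn(self, s, a):
    s = s[np.newaxis, :]
    feed_dict = {self.s: s, self.a: a, self.td_error_ph: self.td_error_real}
    self.sess.run([self.train_op_A, self.exp_v], feed_dict)
```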
How do I rank student scores with a SQL statement?
There is a student score table assess_score with columns total_score (total score), id, student_name (name), student_number (student number), and assess_year (academic year). I need to rank students by total score, and the ranking has to handle tied scores and be computed per academic year. Below is what I wrote; please help me fix it.
```
SELECT assess_year, student_number, id, SUM(courseScore),
       DENSE_RANK() OVER (ORDER BY SUM(courseScore) DESC) AS Ranking
FROM course_score AS courseScore
GROUP BY student_number, assess_year, id
```
Running tensorflow raises tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed
运行tensorflow时出现tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed这个错误,查了一下说是gpu被占用了,从下面这里开始出问题的: ``` 2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) (60000, 28, 28) (60000, 10) 2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found ``` ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png) 最后显示的问题: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png) 试了一下网上的方法,比如加代码: ``` gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) ``` 但最后提示: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png) 现在不知道要怎么解决了。新手想试下简单的数字识别,步骤也是按教程一步步来的,可能用的版本和教程不一样,我用的是刚下的:2.0tensorflow和以下: ![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png) 不知道会不会有版本问题,现在紧急求助各位大佬,还有没有其它可以尝试的方法。测试程序加法运算可以执行,数字识别图片运行的时候我看了下,GPU最大占有率才0.2%,下面是完整数字图片识别代码: ``` import os import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers, optimizers, datasets os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' #gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2) #sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) (x, y), (x_val, y_val) = datasets.mnist.load_data() x = tf.convert_to_tensor(x, dtype=tf.float32) / 255. y = tf.convert_to_tensor(y, dtype=tf.int32) y = tf.one_hot(y, depth=10) print(x.shape, y.shape) train_dataset = tf.data.Dataset.from_tensor_slices((x, y)) train_dataset = train_dataset.batch(200) model = keras.Sequential([ layers.Dense(512, activation='relu'), layers.Dense(256, activation='relu'), layers.Dense(10)]) optimizer = optimizers.SGD(learning_rate=0.001) def train_epoch(epoch): # Step4.loop for step, (x, y) in enumerate(train_dataset): with tf.GradientTape() as tape: # [b, 28, 28] => [b, 784] x = tf.reshape(x, (-1, 28 * 28)) # Step1. compute output # [b, 784] => [b, 10] out = model(x) # Step2. compute loss loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0] # Step3. optimize and update w1, w2, w3, b1, b2, b3 grads = tape.gradient(loss, model.trainable_variables) # w' = w - lr * grad optimizer.apply_gradients(zip(grads, model.trainable_variables)) if step % 100 == 0: print(epoch, step, 'loss:', loss.numpy()) def train(): for epoch in range(30): train_epoch(epoch) if __name__ == '__main__': train() ``` 希望能有人给下建议或解决方法,拜谢!
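Two hedged observations on the report above: "Could not load dynamic library 'cublas64_100.dll'" means the CUDA 10.0 runtime that this TF 2.0 build expects is not installed or not on PATH, and the later Blas GEMM failure is typically a consequence of that; also, tf.GPUOptions and tf.Session are TF 1.x APIs, which is why adding them does not help. Once CUDA/cuDNN are fixed, the TF 2.0 way to keep the GPU from being grabbed all at once is:
```
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Allocate GPU memory on demand instead of reserving it all up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)
```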
Tensorflow Object Detection API: error when using the VOC dataset
Environment: Win7 + Anaconda + Python 3.6 + tensorflow 1.12.0. While running train.py for object detection, execution fails inside array_ops.py around line 903:
```
if ops.is_dense_tensor_like(elem):
    if dtype is not None and elem.dtype.base_dtype != dtype:
        raise TypeError("Cannot convert a list containing a tensor of dtype "
                        "%s to %s (Tensor is: %r)" % (elem.dtype, dtype, elem))
    converted_elems.append(elem)
    must_pack = True
elif isinstance(elem, (list, tuple)):
    converted_elem = _autopacking_helper(elem, dtype, str(i))
    if ops.is_dense_tensor_like(converted_elem):
        must_pack = True
    converted_elems.append(converted_elem)
else:
    converted_elems.append(elem)
```
The error is: TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'Preprocessor/stack_1:0' shape=(1, 3) dtype=int32>) Has anyone hit the same problem? I have searched a lot and found nothing reliable.
Why is the loss after reloading a tensorflow model and continuing training larger than when continuing to train the original model?
我使用tensorflow训练了一个模型,在第10个epoch时保存模型,然后在一个新的文件里重载模型继续训练,结果我发现重载的模型在第一个epoch的loss比原模型在epoch=11的loss要大,我感觉既然是重载了原模型,那么重载模型训练的第一个epoch应该是和原模型训练的第11个epoch相等的,一直找不到问题或者自己逻辑的问题,希望大佬能指点迷津。源代码和重载模型的代码如下: ``` 原代码: from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf import os import numpy as np os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' mnist = input_data.read_data_sets("./",one_hot=True) tf.reset_default_graph() ###定义数据和标签 n_inputs = 784 n_classes = 10 X = tf.placeholder(tf.float32,[None,n_inputs],name='X') Y = tf.placeholder(tf.float32,[None,n_classes],name='Y') ###定义网络结构 n_hidden_1 = 256 n_hidden_2 = 256 layer_1 = tf.layers.dense(inputs=X,units=n_hidden_1,activation=tf.nn.relu,kernel_regularizer=tf.contrib.layers.l2_regularizer(0.01)) layer_2 = tf.layers.dense(inputs=layer_1,units=n_hidden_2,activation=tf.nn.relu,kernel_regularizer=tf.contrib.layers.l2_regularizer(0.01)) outputs = tf.layers.dense(inputs=layer_2,units=n_classes,name='outputs') pred = tf.argmax(tf.nn.softmax(outputs,axis=1),axis=1) print(pred.name) err = tf.count_nonzero((pred - tf.argmax(Y,axis=1))) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=outputs,labels=Y),name='cost') print(cost.name) ###定义优化器 learning_rate = 0.001 optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost,name='OP') saver = tf.train.Saver() checkpoint = 'softmax_model/dense_model.cpkt' ###训练 batch_size = 100 training_epochs = 11 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch in range(training_epochs): batch_num = int(mnist.train.num_examples / batch_size) epoch_cost = 0 sumerr = 0 for i in range(batch_num): batch_x,batch_y = mnist.train.next_batch(batch_size) c,e = sess.run([cost,err],feed_dict={X:batch_x,Y:batch_y}) _ = sess.run(optimizer,feed_dict={X:batch_x,Y:batch_y}) epoch_cost += c / batch_num sumerr += e / mnist.train.num_examples if epoch == (training_epochs - 1): print('batch_cost = ',c) if epoch == (training_epochs - 2): saver.save(sess, checkpoint) print('test_error = ',sess.run(cost, feed_dict={X: mnist.test.images, Y: mnist.test.labels})) ``` ``` 重载模型的代码: from tensorflow.examples.tutorials.mnist import input_data import tensorflow as tf import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' mnist = input_data.read_data_sets("./",one_hot=True) #one_hot=True指对样本标签进行独热编码 file_path = 'softmax_model/dense_model.cpkt' saver = tf.train.import_meta_graph(file_path + '.meta') graph = tf.get_default_graph() X = graph.get_tensor_by_name('X:0') Y = graph.get_tensor_by_name('Y:0') cost = graph.get_operation_by_name('cost').outputs[0] train_op = graph.get_operation_by_name('OP') training_epoch = 10 learning_rate = 0.001 batch_size = 100 with tf.Session() as sess: saver.restore(sess,file_path) print('test_cost = ',sess.run(cost, feed_dict={X: mnist.test.images, Y: mnist.test.labels})) for epoch in range(training_epoch): batch_num = int(mnist.train.num_examples / batch_size) epoch_cost = 0 for i in range(batch_num): batch_x, batch_y = mnist.train.next_batch(batch_size) c = sess.run(cost, feed_dict={X: batch_x, Y: batch_y}) _ = sess.run(train_op, feed_dict={X: batch_x, Y: batch_y}) epoch_cost += c / batch_num print(epoch_cost) ``` 值得注意的是,我在原模型和重载模型里都计算了测试集的cost,两者的结果是一致的。说明参数载入应该是对的
theano error: module 'configparser' has no attribute 'ConfigParser' (Anaconda3, Python 3.6)
>theano 报错 module 'configparser' has no attribute 'ConfigParser' 用的是Win10 Anaconda3 python3.6 ``` from sklearn.datasets import load_boston import theano.tensor as T import numpy as np import matplotlib.pyplot as plt import theano class Layer(object): def __init__(self,inputs,in_size,out_size,activation_function=None): self.W = theano.shared(np.random.normal(0,1,(in_size,out_size))) self.b = theano.shared(np.zeros((out_size,)) + 0.1) self.Wx_plus_b = T.dot(inputs, self.W) + self.b self.activation_function = activation_function if activation_function is None: self.outputs = self.Wx_plus_b else: self.outputs = self.activation_function(self.Wx_plus_b) def minmax_normalization(data): xs_max = np.max(data, axis=0) xs_min = np.min(data, axis=0) xs = (1-0)*(data - xs_min)/(xs_max - xs_min) + 0 return xs np.random.seed(100) x_dataset = load_boston() x_data = x_dataset.data # minmax normalization, rescale the inputs x_data = minmax_normalization(x_data) y_data = x_dataset.target[:,np.newaxis] #cross validation, train test data split x_train, y_train = x_data[:400], y_data[:400] x_test, y_test = x_data[400:], y_data[400:] x = T.dmatrix('x') y = T.dmatrix('y') l1 = Layer(x, 13, 50, T.tanh) l2 = Layer(l1.outputs, 50, 1, None) #compute cost cost = T.mean(T.square(l2.outputs - y)) #cost = T.mean(T.square(l2.outputs - y)) + 0.1*((l1.W**2).sum() + (l2.W**2).sum()) #l2 regulization #cost = T.mean(T.square(l2.outputs - y)) + 0.1*(abs(l1.W).sum() + abs(l2.W).sum()) #l1 regulization gW1, gb1, gW2, gb2 = T.grad(cost, [l1.W,l1.b,l2.W,l2.b]) #gradient descend learning_rate = 0.01 train = theano.function(inputs=[x,y], updates=[(l1.W,l1.W-learning_rate*gW1), (l1.b,l1.b-learning_rate*gb1), (l2.W,l2.W-learning_rate*gW2), (l2.b,l2.b-learning_rate*gb2)]) compute_cost = theano.function(inputs=[x,y], outputs=cost) #record cost train_err_list = [] test_err_list = [] learning_time = [] for i in range(1000): if 1%10 == 0: #record cost train_err_list.append(compute_cost(x_train,y_train)) test_err_list.append(compute_cost(x_test,y_test)) learning_time.append(i) #plot cost history plt.plot(learning_time, train_err_list, 'r-') plt.plot(learning_time, test_err_list,'b--') plt.show() #作者 morvan莫凡 https://morvanzhou.github.io ``` 报错了: Traceback (most recent call last): File "C:/Users/Elena/PycharmProjects/theano/regularization.py", line 1, in <module> from sklearn.datasets import load_boston File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\datasets\__init__.py", line 22, in <module> from .twenty_newsgroups import fetch_20newsgroups File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\datasets\twenty_newsgroups.py", line 44, in <module> from ..feature_extraction.text import CountVectorizer File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\feature_extraction\__init__.py", line 10, in <module> from . 
import text File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\feature_extraction\text.py", line 28, in <module> from ..preprocessing import normalize File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\preprocessing\__init__.py", line 6, in <module> from ._function_transformer import FunctionTransformer File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\preprocessing\_function_transformer.py", line 5, in <module> from ..utils.testing import assert_allclose_dense_sparse File "C:\Users\Elena\Anaconda3\lib\site-packages\sklearn\utils\testing.py", line 61, in <module> from nose.tools import raises as _nose_raises File "C:\Users\Elena\Anaconda3\lib\site-packages\nose\__init__.py", line 1, in <module> from nose.core import collector, main, run, run_exit, runmodule File "C:\Users\Elena\Anaconda3\lib\site-packages\nose\core.py", line 11, in <module> from nose.config import Config, all_config_files File "C:\Users\Elena\Anaconda3\lib\site-packages\nose\config.py", line 6, in <module> import configparser File "C:\Users\Elena\Anaconda3\Lib\site-packages\theano\configparser.py", line 15, in <module> import theano File "C:\Users\Elena\Anaconda3\lib\site-packages\theano\__init__.py", line 88, in <module> from theano.configdefaults import config File "C:\Users\Elena\Anaconda3\lib\site-packages\theano\configdefaults.py", line 17, in <module> from theano.configparser import (AddConfigVar, BoolParam, ConfigParam, EnumStr, File "C:\Users\Elena\Anaconda3\lib\site-packages\theano\configparser.py", line 77, in <module> theano_cfg = (configparser.ConfigParser if PY3 **AttributeError: module 'configparser' has no attribute 'ConfigParser**' 把theano里的configparser.py文件里的ConfigParser改成了configparser还是不行 换了模块import configparsor也不行。。。![图片说明](https://img-ask.csdn.net/upload/201909/30/1569832318_223436.png)
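A hedged diagnostic for the traceback above: `import configparser` is resolving to ...\site-packages\theano\configparser.py instead of the standard library module, i.e. theano's package directory is shadowing stdlib modules on sys.path (this often happens when the script is run from inside that directory, or a stray path entry points at it). Checking which file wins narrows it down; editing theano's own files is usually not the fix:
```
import sys
import configparser

print(configparser.__file__)   # should be ...\Anaconda3\lib\configparser.py
for p in sys.path:
    print(p)                   # look for an entry ending in ...\site-packages\theano
```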
Training on fashion_mnist with tensorflow in Spyder: the loss is NaN and the accuracy never changes
1.建立了一个3个全连接层的神经网络; 2.代码如下: ``` import matplotlib as mpl import matplotlib.pyplot as plt #%matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time import tensorflow as tf from tensorflow import keras print(tf.__version__) print(sys.version_info) for module in mpl, np, sklearn, tf, keras: print(module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data() x_valid, x_train = x_train_all[:5000], x_train_all[5000:] y_valid, y_train = y_train_all[:5000], y_train_all[5000:] #tf.keras.models.Sequential model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape= [28,28])) model.add(keras.layers.Dense(300, activation="relu")) model.add(keras.layers.Dense(100, activation="relu")) model.add(keras.layers.Dense(10,activation="softmax")) ###sparse为最后输出为index类型,如果为one hot类型,则不需加sparse model.compile(loss = "sparse_categorical_crossentropy",optimizer = "sgd", metrics = ["accuracy"]) #model.layers #model.summary() history = model.fit(x_train, y_train, epochs=10, validation_data=(x_valid,y_valid)) ``` 3.输出结果: ``` runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2') 2.0.0 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0) matplotlib 3.1.1 numpy 1.16.5 sklearn 0.21.3 tensorflow 2.0.0 tensorflow_core.keras 2.2.4-tf Train on 55000 samples, validate on 5000 samples Epoch 1/10 WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: 55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 2/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 3/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 4/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 5/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 6/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 7/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 8/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 9/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 10/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 ```
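A hedged sketch for the code above: NaN loss with raw Fashion-MNIST pixels (0 to 255) fed straight into dense layers under SGD is very common; scaling the inputs to [0, 1] before training usually makes the loss finite and lets the accuracy move.
```
# Scale pixel values before fitting (x_train / x_valid / x_test come from the
# split in the question).
x_train = x_train / 255.0
x_valid = x_valid / 255.0
x_test = x_test / 255.0

history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_valid, y_valid))
```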
optim.compute_gradients: why is the gradient (first element) of every returned pair None?
1.问题描述 模型通过keras.models.Sequential构建 loss => tf.losses.sparse_softmax_cross_entropy 通过 var_list=tf.trainable_variables() 获取可训练变量 计算梯度值: loss_op = tf.losses.sparse_softmax_cross_entropy(y, y_pred) grads_vars = optim.compute_gradients(loss_op, tf.trainable_variables()) grads_vars返回的第一元素为 None,导致面的程序异常。 为什么grads_vars第一列返回的元素为None? 2.相关代码 ``` import tensorflow as tf import numpy as np import time import keras # 加载数据集 x_dataset=np.random.rand(1000,28,28,1) y_dataset=np.random.randint(0,10,size=(1000,)) act = tf.nn.leaky_relu epoch = 200 batch_size = 5000 n_batch = len(x_dataset) // batch_size # 把 batch 分成多少个 sub batch 来计算 subdivisions = 50 subdivisions_batch_size = int(np.ceil(batch_size / subdivisions)) # 是否使用 sub batch 方法,设置为 False 代表使用默认方法 is_on_subdivisions = True def get_model(is_train=True, reuse=False): with tf.variable_scope('model', reuse=reuse): net = keras.models.Sequential() net.add(keras.layers.Conv2D(128,(3,3),input_shape=(28,28,1),strides=(2,2),padding='same',name='c1')) net.add(keras.layers.GlobalAveragePooling2D()) net.add(keras.layers.Dense(10)) return net x = tf.placeholder(tf.float32, [None, 28, 28, 1]) y = tf.placeholder(tf.int32, [None,]) net = get_model() y_pred=tf.cast(tf.argmax(net.outputs[0],axis=-1),dtype=tf.float32) loss_op = tf.losses.sparse_softmax_cross_entropy(y, y_pred) optim = tf.train.AdamOptimizer(0.01) var_list=tf.trainable_variables() grads_vars = optim.compute_gradients(loss_op, tf.trainable_variables()) #grads_vars返回的第一列为None,为什么? for gv in grads_vars: print(gv) # 删掉没梯度的参数, 倒序删除,减少麻烦 for i in range(len(grads_vars))[::-1]: if grads_vars[i][0] is None: del grads_vars[i] #因为返回的第一列为None,所以所有变量都被删除了,导致后面的异常! print('len(grads_vars):',len(grads_vars)) # 生成梯度缓存:grads_vars第一列为None触发异常 grads_cache = [tf.Variable(np.zeros(t[0].shape.as_list(), np.float32), trainable=False) for t in grads_vars] # 清空梯度缓存op,每一 batch 开始前调用 clear_grads_cache_op = tf.group([gc.assign(tf.zeros_like(gc)) for gc in grads_cache]) # 累积梯度op,累积每个 sub batch 的梯度 accumulate_grad_op = tf.group([gc.assign_add(gv[0]) for gc, gv in zip(grads_cache, grads_vars)]) # 求平均梯度, mean_grad = [gc/tf.to_float(subdivisions) for gc in grads_cache] # 组装梯度列表 new_grads_vars = [(g, gv[1]) for g, gv in zip(mean_grad, grads_vars)] # 应用梯度op,累积完所有 sub batch 的梯度后,应用梯度 apply_grad_op = optim.apply_gradients(new_grads_vars) # 原来的 optim ,跟上面做对照 ori_optim_op = tf.train.AdamOptimizer(0.01).minimize(loss_op, var_list=net.all_params) config = tf.ConfigProto() config.gpu_options.allow_growth = True config.allow_soft_placement = True sess = tf.Session(config=config) sess.run(tf.global_variables_initializer()) for e in range(epoch): loss_sum = 0 for b in progressbar(range(n_batch)): x_batch = x_dataset[b * batch_size: (b + 1) * batch_size] y_batch = y_dataset[b * batch_size: (b + 1) * batch_size] if is_on_subdivisions: # 每一批开始前需要清空梯度缓存 sess.run(clear_grads_cache_op) sub_loss_sum = 0 for s in range(subdivisions): x_sub_batch = x_batch[s * subdivisions_batch_size: (s + 1) * subdivisions_batch_size] y_sub_batch = y_batch[s * subdivisions_batch_size: (s + 1) * subdivisions_batch_size] if len(x_sub_batch) == 0: break feed_dict = {x: x_sub_batch, y: y_sub_batch} _, los = sess.run([accumulate_grad_op, loss_op], feed_dict) sub_loss_sum += los loss_sum += sub_loss_sum / subdivisions # 梯度累积完成,开始应用梯度 sess.run(apply_grad_op) # 本批次结束 else: feed_dict = {x: x_batch, y: y_batch} _, los = sess.run([ori_optim_op, loss_op], feed_dict) loss_sum += los time.sleep(0.2) print('loss', loss_sum / n_batch) ``` 3.报错信息 ``` grads_vars: (None, <tf.Variable 
'model/c1/kernel:0' shape=(3, 3, 1, 128) dtype=float32_ref>) (None, <tf.Variable 'model/c1/bias:0' shape=(128,) dtype=float32_ref>) (None, <tf.Variable 'model/dense_1/kernel:0' shape=(128, 10) dtype=float32_ref>) (None, <tf.Variable 'model/dense_1/bias:0' shape=(10,) dtype=float32_ref>) (None, <tf.Variable 'model_1/c1/kernel:0' shape=(3, 3, 1, 128) dtype=float32_ref>) (None, <tf.Variable 'model_1/c1/bias:0' shape=(128,) dtype=float32_ref>) (None, <tf.Variable 'model_1/dense_2/kernel:0' shape=(128, 10) dtype=float32_ref>) (None, <tf.Variable 'model_1/dense_2/bias:0' shape=(10,) dtype=float32_ref>) (None, <tf.Variable 'model_2/c1/kernel:0' shape=(3, 3, 1, 128) dtype=float32_ref>) (None, <tf.Variable 'model_2/c1/bias:0' shape=(128,) dtype=float32_ref>) (None, <tf.Variable 'model_2/dense_3/kernel:0' shape=(128, 10) dtype=float32_ref>) (None, <tf.Variable 'model_2/dense_3/bias:0' shape=(10,) dtype=float32_ref>) len(grads_vars): 0 ``` 4.尝试过的方法方式 5.相关截图
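A hedged diagnosis of the None gradients above: the loss is built on tf.argmax, which is not differentiable, and on net.outputs, which is not connected to the placeholder x at all, so no trainable variable lies on the path from the loss back to the data. Feeding x through the Keras model and handing the raw logits to the loss restores the gradient path. A sketch of the changed lines only, using the names from the question:
```
logits = net(x)   # connect the Sequential model to the placeholder
loss_op = tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
grads_vars = optim.compute_gradients(loss_op, tf.trainable_variables())
```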
Keras: problems with self-attention and with Recall / F1-score metrics
麻烦大神帮忙看一下: (1)为何返回不了Precise, Recall, F1-socre值? (2)为何在CNN前加了self-attention层,训练后的acc反而降低在0.78上下? 【研一小白求详解,万分感谢大神】 ``` import os #导入os模块,用于确认文件是否存在 import numpy as np from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.callbacks import Callback from sklearn.metrics import f1_score, precision_score, recall_score maxlen = 380#句子长截断为100 training_samples = 20000#在 200 个样本上训练 validation_samples = 5000#在 10 000 个样本上验证 max_words = 10000#只考虑数据集中前 10 000 个最常见的单词 def dataProcess(): imdb_dir = 'data/aclImdb'#基本路径,经常要打开这个 #处理训练集 train_dir = os.path.join(imdb_dir, 'train')#添加子路径 train_labels = [] train_texts = [] for label_type in ['neg', 'pos']: dir_name = os.path.join(train_dir, label_type) for fname in os.listdir(dir_name):#获取目录下所有文件名字 if fname[-4:] == '.txt': f = open(os.path.join(dir_name, fname),'r',encoding='utf8') train_texts.append(f.read()) f.close() if label_type == 'neg': train_labels.append(0) else:train_labels.append(1) #处理测试集 test_dir = os.path.join(imdb_dir, 'test') test_labels = [] test_texts = [] for label_type in ['neg', 'pos']: dir_name = os.path.join(test_dir, label_type) for fname in sorted(os.listdir(dir_name)): if fname[-4:] == '.txt': f = open(os.path.join(dir_name, fname),'r',encoding='utf8') test_texts.append(f.read()) f.close() if label_type == 'neg': test_labels.append(0) else: test_labels.append(1) #对数据进行分词和划分训练集和数据集 tokenizer = Tokenizer(num_words=max_words) tokenizer.fit_on_texts(train_texts)#构建单词索引结构 sequences = tokenizer.texts_to_sequences(train_texts)#整数索引的向量化模型 word_index = tokenizer.word_index#索引字典 print('Found %s unique tokens.' % len(word_index)) data = pad_sequences(sequences, maxlen=maxlen) train_labels = np.asarray(train_labels)#把列表转化为数组 print('Shape of data tensor:', data.shape) print('Shape of label tensor:', train_labels.shape) indices = np.arange(data.shape[0])#评论顺序0,1,2,3 np.random.shuffle(indices)#把评论顺序打乱3,1,2,0 data = data[indices] train_labels = train_labels[indices] x_train = data[:training_samples] y_train = train_labels[:training_samples] x_val = data[training_samples: training_samples + validation_samples] y_val = train_labels[training_samples: training_samples + validation_samples] #同样需要将测试集向量化 test_sequences = tokenizer.texts_to_sequences(test_texts) x_test = pad_sequences(test_sequences, maxlen=maxlen) y_test = np.asarray(test_labels) return x_train,y_train,x_val,y_val,x_test,y_test,word_index embedding_dim = 100#特征数设为100 #"""将预训练的glove词嵌入文件,构建成可以加载到embedding层中的嵌入矩阵""" def load_glove(word_index):#导入glove的词向量 embedding_file='data/glove.6B' embeddings_index={}#定义字典 f = open(os.path.join(embedding_file, 'glove.6B.100d.txt'),'r',encoding='utf8') for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs f.close() # """转化为矩阵:构建可以加载到embedding层中的嵌入矩阵,形为(max_words(单词数), embedding_dim(向量维数)) """ embedding_matrix = np.zeros((max_words, embedding_dim)) for word, i in word_index.items():#字典里面的单词和索引 if i >= max_words:continue embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector return embedding_matrix if __name__ == '__main__': x_train, y_train, x_val, y_val,x_test,y_test, word_index = dataProcess() embedding_matrix=load_glove(word_index) #可以把得到的嵌入矩阵保存起来,方便后面fine-tune""" # #保存 from keras.models import Sequential from keras.layers.core import Dense,Dropout,Activation,Flatten from keras.layers.recurrent import LSTM from keras.layers import Embedding from keras.layers 
import Bidirectional from keras.layers import Conv1D, MaxPooling1D import keras from keras_self_attention import SeqSelfAttention model = Sequential() model.add(Embedding(max_words, embedding_dim, input_length=maxlen)) model.add(SeqSelfAttention(attention_activation='sigmod')) model.add(Conv1D(filters = 64, kernel_size = 5, padding = 'same', activation = 'relu')) model.add(MaxPooling1D(pool_size = 4)) model.add(Dropout(0.25)) model.add(Bidirectional(LSTM(64,activation='tanh',dropout=0.2,recurrent_dropout=0.2))) model.add(Dense(256, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(1, activation='sigmoid')) model.summary() model.layers[0].set_weights([embedding_matrix]) model.layers[0].trainable = False model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc']) class Metrics(Callback): def on_train_begin(self, logs={}): self.val_f1s = [] self.val_recalls = [] self.val_precisions = [] def on_epoch_end(self, epoch, logs={}): val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round() val_targ = self.validation_data[1] _val_f1 = f1_score(val_targ, val_predict) _val_recall = recall_score(val_targ, val_predict) _val_precision = precision_score(val_targ, val_predict) self.val_f1s.append(_val_f1) self.val_recalls.append(_val_recall) self.val_precisions.append(_val_precision) return metrics = Metrics() history = model.fit(x_train, y_train, epochs=10, batch_size=32, validation_data=(x_val, y_val), callbacks=[metrics]) model.save_weights('pre_trained_glove_model.h5')#保存结果 ```
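Two hedged observations on the code above: `attention_activation='sigmod'` is presumably a typo for 'sigmoid', and newer Keras versions no longer populate `self.validation_data` inside callbacks, so the Metrics callback silently never computes anything. Passing the validation split to the callback explicitly avoids relying on that attribute; the class name below is an assumption.
```
import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

class ValMetrics(Callback):
    def __init__(self, x_val, y_val):
        super(ValMetrics, self).__init__()
        self.x_val, self.y_val = x_val, y_val

    def on_epoch_end(self, epoch, logs=None):
        # Threshold the sigmoid outputs and score against the held-out labels.
        y_pred = (self.model.predict(self.x_val) > 0.5).astype('int32')
        print('val precision %.4f  recall %.4f  f1 %.4f' % (
            precision_score(self.y_val, y_pred),
            recall_score(self.y_val, y_pred),
            f1_score(self.y_val, y_pred)))

# usage: model.fit(..., callbacks=[ValMetrics(x_val, y_val)])
```
As for the accuracy drop to around 0.78 after adding the attention layer, that is plausibly a tuning issue rather than a bug, but the typo above means the layer was not using the intended activation in the first place.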
plot raises Permission denied on Ubuntu 16.04
``` import numpy as np from keras.models import Sequential from keras.layers.core import Dense, Activation from keras.optimizers import SGD from keras.utils import np_utils from keras.utils.visualize_util import plot def run(): model = Sequential() model.add(Dense(4, input_dim=2, init='uniform')) model.add(Activation('relu')) model.add(Dense(2, init='uniform')) model.add(Activation('sigmoid')) sgd = SGD(lr=0.05, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy']) plot(model, to_file='model.png') if __name__ == '__main__': run() ``` 错误提示: Traceback (most recent call last): File "example2.py", line 21, in _**<module>** run() File "example2.py", line 18, in run plot(model, to_file='model.png') File "/home/c249/anaconda2/lib/python2.7/site-packages/keras/utils/visualize_util.py", line 64, in plot dot.write_png(to_file) File "/home/c249/anaconda2/lib/python2.7/site-packages/pydot.py", line 1811, in <lambda> lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog)) File "/home/c249/anaconda2/lib/python2.7/site-packages/pydot.py", line 1913, in write dot_fd.write(self.create(prog, format)) File "/home/c249/anaconda2/lib/python2.7/site-packages/pydot.py", line 1992, in create stderr=subprocess.PIPE, stdout=subprocess.PIPE) File "/home/c249/anaconda2/lib/python2.7/subprocess.py", line 390, in __init__ errread, errwrite) File "/home/c249/anaconda2/lib/python2.7/subprocess.py", line 1024, in _execute_child raise child_exception OSError: [Errno 13] Permission denied
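A hedged guess at the cause: pydot raising "Permission denied" while spawning a subprocess usually means the Graphviz `dot` executable is missing or not executable, not a Keras problem. On Ubuntu, `sudo apt-get install graphviz` plus `pip install pydot` typically resolves it. A quick check from Python (works on Python 2.7 as used here):
```
import subprocess

# Prints the Graphviz version if `dot` is installed and executable;
# raises OSError otherwise, which mirrors the error in the question.
print(subprocess.check_output(['dot', '-V'], stderr=subprocess.STDOUT))
```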

Similar questions

- Keras, Tensorflow, ValueError
- The Keras output layer should produce shape (None, 1), but my model outputs a 3-D shape (None, 10, 1)
- Tensorflow Object Detection API: error when using the VOC dataset
- Why is the loss after reloading a tensorflow model and continuing training larger than when continuing to train the original model?
- After increasing the number of input samples for an LSTM model, the loss changes as shown in the two figures
- VS2017 and OpenCV 2.4 debugging problem: cannot find or open the PDB file
- Training a model with a neural network reports that a string cannot be converted to float; how can I fix it?
- Keras face recognition: training fails, please advise
- How can Keras in TensorFlow use a Dataset as input?
- SQL query error: java.lang.Exception: ERROR: ORA-03001: unimplemented feature
- Dynamically fetching data from mysql with ajax and displaying it on the front-end page
- MNIST multi-digit recognition: please help me finish my code
- Keras sequence-to-sequence task: how do I fix this basic error?
- How do I rank student scores with a SQL statement?
- Keras: problems with self-attention and with Recall / F1-score metrics
- How should the input format be set when applying a pretrained Keras model to new data?
- optim.compute_gradients: why is the gradient (first element) of every returned pair None?
- Deep reinforcement learning: an Actor-Critic agent playing CartPole from the gym library does not converge
- tensorflow.python.framework.errors_impl.InvalidArgumentError
- Why does predict_classes(x) keep raising errors no matter what format I use for x?