Building a single-character recognition network for SimHei (printed) Chinese characters with Keras: val_acc stays at 0.0002

This is how I read in the training data. The dataset consists of single-character images (shape 64×64) generated from a txt file, covering 788 character classes (including digits and the character X). Each image's filename starts with the character's position in the txt dictionary, which is used as the label.

    def read_train_image(self, name):
        img = Image.open(name).convert('RGB')
        return np.array(img)

    def train(self):
        train_img_list = []
        train_label_list = []
        for file in os.listdir('train'):
            files_img_in_array = self.read_train_image(name='train/' + file)
            train_img_list.append(files_img_in_array)         # collect image arrays
            train_label_list.append(int(file.split('_')[0]))  # label is the filename prefix

        train_img_list = np.array(train_img_list)
        train_label_list = np.array(train_label_list)

        # one-hot encode the labels; self.count is the number of classes
        train_label_list = np_utils.to_categorical(train_label_list, self.count)

        # scale pixel values to [0, 1]
        train_img_list = train_img_list.astype('float32')
        train_img_list /= 255
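Before training, it is worth verifying that every label parsed from a filename lies in `[0, self.count)`, since `np_utils.to_categorical` assumes that range. A minimal, Keras-free sanity check (the filenames here are made up to match the naming scheme above; `np.eye` mimics what `to_categorical` produces):

```python
import numpy as np

# Hypothetical filenames following the "<label>_<anything>.png" scheme above.
files = ['12_0.png', '787_3.png', '0_1.png']
labels = np.array([int(f.split('_')[0]) for f in files])

count = 788  # number of classes; every label must be strictly less than this
assert labels.min() >= 0 and labels.max() < count

# One row per sample, a single 1 at the label index - same layout as to_categorical.
one_hot = np.eye(count)[labels]
assert one_hot.shape == (len(files), count)
assert (one_hot.argmax(axis=1) == labels).all()
```

If any filename prefix equals or exceeds `self.count`, the encoding step will fail or silently mislabel samples, so this check is cheap insurance.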

After training, train_acc reaches 0.99, but the validation accuracy stays at 0 the whole time.
Here is the network structure:

        model = Sequential()

        # First convolutional layer.
        model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(64, 64, 3), kernel_regularizer=l2(0.0001)))
        model.add(BatchNormalization(axis=3))
        model.add(Activation('relu'))

        model.add(MaxPooling2D(pool_size=(2, 2)))

        # Second convolutional layer.
        model.add(Convolution2D(64, 3, 3, border_mode='valid', kernel_regularizer=l2(0.0001)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))

        model.add(MaxPooling2D(pool_size=(2, 2)))

        # Third convolutional layer.
        model.add(Convolution2D(128, 3, 3, border_mode='valid', kernel_regularizer=l2(0.0001)))
        model.add(BatchNormalization())
        model.add(Activation('relu'))

        model.add(MaxPooling2D(pool_size=(2, 2)))

        # Fully connected layer.
        model.add(Flatten())
        model.add(Dense(128, init='he_normal'))
        model.add(BatchNormalization())
        model.add(Activation('relu'))

        # Output layer: softmax over the character classes.
        model.add(Dense(output_dim=self.count, init='he_normal'))
        model.add(Activation('softmax'))

        # Loss function and optimizer.
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

        # Train with the given batch size and number of epochs.
        model.fit(
            train_img_list,
            train_label_list,
            epochs=500,
            batch_size=128,
            validation_split=0.2,
            verbose=1,
            shuffle=False,
        )
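One detail in the call above that can produce val_acc = 0 on its own: Keras's `validation_split` always takes the last fraction of the arrays as passed in, before any shuffling, and `shuffle=False` preserves the `os.listdir` order throughout. If the files come back grouped by class (e.g. sorted by the label prefix), the validation split can consist entirely of classes the model never sees in training. A small numpy demonstration of that failure mode (no Keras needed; the 10-class layout is illustrative):

```python
import numpy as np

# 10 classes, 16 samples each, ordered by class -
# the layout a sorted directory listing would produce.
labels = np.repeat(np.arange(10), 16)

# validation_split=0.2 slices off the last 20% of samples, before shuffling.
split = int(len(labels) * 0.8)
train_labels, val_labels = labels[:split], labels[split:]

# The validation set holds only classes 8 and 9,
# which never appear in the training portion.
assert set(val_labels) == {8, 9}
assert set(train_labels).isdisjoint(val_labels)
```

If that is what is happening here, shuffling images and labels in unison (e.g. with one `np.random.permutation` of the indices applied to both arrays) before calling `fit` should move val_acc off zero.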

That is roughly the structure. After a dozen or so epochs the training accuracy reaches 0.99, but val_acc stays at 0. People online say this usually means overfitting; I would appreciate advice from anyone experienced.

1 answer

How large is your dataset? With several thousand Chinese characters you have several thousand classes, and with too few samples per class the model simply cannot learn.
You could start with English letters or digit recognition first.
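One way to act on this suggestion without changing the data layout is to filter the file list down to a handful of classes and remap their labels to a compact 0-based range, so the softmax layer stays small. A sketch (the filenames are hypothetical, following the asker's `<label>_<index>` naming scheme):

```python
# Hypothetical file list in the "<label>_<index>.png" naming scheme.
files = ['0_0.png', '0_1.png', '3_0.png', '4000_0.png', '5787_0.png']

# Keep only labels below 10 to start small, then remap them to 0..n-1.
keep = [f for f in files if int(f.split('_')[0]) < 10]
kept_labels = sorted({int(f.split('_')[0]) for f in keep})
remap = {old: new for new, old in enumerate(kept_labels)}

assert keep == ['0_0.png', '0_1.png', '3_0.png']
assert remap == {0: 0, 3: 1}
```

The output `Dense` layer would then have `len(remap)` units instead of 788, making it much easier to tell a data problem from a capacity problem.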

qq_41654741: 5788 characters, 16 images each. Isn't that enough?
Replied 10 months ago
其他相关推荐
fashion_mnist识别准确率问题
fashion_mnist识别准确率一般为多少呢?我看好多人都是92%左右,但是我用一个网络达到了94%,想问问做过的小伙伴到底是多少? ``` #这是我的结果示意 x_shape: (60000, 28, 28) y_shape: (60000,) epoches: 0 val_acc: 0.4991 train_acc 0.50481665 epoches: 1 val_acc: 0.6765 train_acc 0.66735 epoches: 2 val_acc: 0.755 train_acc 0.7474 epoches: 3 val_acc: 0.7846 train_acc 0.77915 epoches: 4 val_acc: 0.798 train_acc 0.7936 epoches: 5 val_acc: 0.8082 train_acc 0.80365 epoches: 6 val_acc: 0.8146 train_acc 0.8107 epoches: 7 val_acc: 0.8872 train_acc 0.8872333 epoches: 8 val_acc: 0.896 train_acc 0.89348334 epoches: 9 val_acc: 0.9007 train_acc 0.8986 epoches: 10 val_acc: 0.9055 train_acc 0.90243334 epoches: 11 val_acc: 0.909 train_acc 0.9058833 epoches: 12 val_acc: 0.9112 train_acc 0.90868336 epoches: 13 val_acc: 0.9126 train_acc 0.91108334 epoches: 14 val_acc: 0.9151 train_acc 0.9139 epoches: 15 val_acc: 0.9172 train_acc 0.91595 epoches: 16 val_acc: 0.9191 train_acc 0.91798335 epoches: 17 val_acc: 0.9204 train_acc 0.91975 epoches: 18 val_acc: 0.9217 train_acc 0.9220333 epoches: 19 val_acc: 0.9252 train_acc 0.9234667 epoches: 20 val_acc: 0.9259 train_acc 0.92515 epoches: 21 val_acc: 0.9281 train_acc 0.9266667 epoches: 22 val_acc: 0.9289 train_acc 0.92826664 epoches: 23 val_acc: 0.9301 train_acc 0.93005 epoches: 24 val_acc: 0.9315 train_acc 0.93126667 epoches: 25 val_acc: 0.9322 train_acc 0.9328 epoches: 26 val_acc: 0.9331 train_acc 0.9339667 epoches: 27 val_acc: 0.9342 train_acc 0.93523335 epoches: 28 val_acc: 0.9353 train_acc 0.93665 epoches: 29 val_acc: 0.9365 train_acc 0.9379333 epoches: 30 val_acc: 0.9369 train_acc 0.93885 epoches: 31 val_acc: 0.9387 train_acc 0.9399 epoches: 32 val_acc: 0.9395 train_acc 0.9409 epoches: 33 val_acc: 0.94 train_acc 0.9417667 epoches: 34 val_acc: 0.9403 train_acc 0.94271666 epoches: 35 val_acc: 0.9409 train_acc 0.9435167 epoches: 36 val_acc: 0.9418 train_acc 0.94443333 epoches: 37 val_acc: 0.942 train_acc 0.94515 epoches: 38 val_acc: 0.9432 train_acc 0.9460667 epoches: 39 val_acc: 0.9443 train_acc 0.9468833 
epoches: 40 val_acc: 0.9445 train_acc 0.94741666 epoches: 41 val_acc: 0.9462 train_acc 0.9482 epoches: 42 val_acc: 0.947 train_acc 0.94893336 epoches: 43 val_acc: 0.9472 train_acc 0.94946665 epoches: 44 val_acc: 0.948 train_acc 0.95028335 epoches: 45 val_acc: 0.9486 train_acc 0.95095 epoches: 46 val_acc: 0.9488 train_acc 0.9515833 epoches: 47 val_acc: 0.9492 train_acc 0.95213336 epoches: 48 val_acc: 0.9495 train_acc 0.9529833 epoches: 49 val_acc: 0.9498 train_acc 0.9537 val_acc: 0.9498 ``` ``` import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt def to_onehot(y,num): lables = np.zeros([num,len(y)]) for i in range(len(y)): lables[y[i],i] = 1 return lables.T # 预处理数据 mnist = keras.datasets.fashion_mnist (train_images,train_lables),(test_images,test_lables) = mnist.load_data() print('x_shape:',train_images.shape) #(60000) print('y_shape:',train_lables.shape) X_train = train_images.reshape((-1,train_images.shape[1]*train_images.shape[1])) / 255.0 #X_train = tf.reshape(X_train,[-1,X_train.shape[1]*X_train.shape[2]]) Y_train = to_onehot(train_lables,10) X_test = test_images.reshape((-1,test_images.shape[1]*test_images.shape[1])) / 255.0 Y_test = to_onehot(test_lables,10) #双隐层的神经网络 input_nodes = 784 output_nodes = 10 layer1_nodes = 100 layer2_nodes = 50 batch_size = 100 learning_rate_base = 0.8 learning_rate_decay = 0.99 regularization_rate = 0.0000001 epochs = 50 mad = 0.99 learning_rate = 0.005 # def inference(input_tensor,avg_class,w1,b1,w2,b2): # if avg_class == None: # layer1 = tf.nn.relu(tf.matmul(input_tensor,w1)+b1) # return tf.nn.softmax(tf.matmul(layer1,w2) + b2) # else: # layer1 = tf.nn.relu(tf.matmul(input_tensor,avg_class.average(w1)) + avg_class.average(b1)) # return tf.matual(layer1,avg_class.average(w2)) + avg_class.average(b2) def train(mnist): X = tf.placeholder(tf.float32,[None,input_nodes],name = "input_x") Y = tf.placeholder(tf.float32,[None,output_nodes],name = "y_true") w1 = 
tf.Variable(tf.truncated_normal([input_nodes,layer1_nodes],stddev=0.1)) b1 = tf.Variable(tf.constant(0.1,shape=[layer1_nodes])) w2 = tf.Variable(tf.truncated_normal([layer1_nodes,layer2_nodes],stddev=0.1)) b2 = tf.Variable(tf.constant(0.1,shape=[layer2_nodes])) w3 = tf.Variable(tf.truncated_normal([layer2_nodes,output_nodes],stddev=0.1)) b3 = tf.Variable(tf.constant(0.1,shape=[output_nodes])) layer1 = tf.nn.relu(tf.matmul(X,w1)+b1) A2 = tf.nn.relu(tf.matmul(layer1,w2)+b2) A3 = tf.nn.relu(tf.matmul(A2,w3)+b3) y_hat = tf.nn.softmax(A3) # y_hat = inference(X,None,w1,b1,w2,b2) # global_step = tf.Variable(0,trainable=False) # variable_averages = tf.train.ExponentialMovingAverage(mad,global_step) # varible_average_op = variable_averages.apply(tf.trainable_variables()) #y = inference(x,variable_averages,w1,b1,w2,b2) cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=A3,labels=Y)) regularizer = tf.contrib.layers.l2_regularizer(regularization_rate) regularization = regularizer(w1) + regularizer(w2) +regularizer(w3) loss = cross_entropy + regularization * regularization_rate # learning_rate = tf.train.exponential_decay(learning_rate_base,global_step,epchos,learning_rate_decay) # train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step) train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # with tf.control_dependencies([train_step,varible_average_op]): # train_op = tf.no_op(name="train") correct_prediction = tf.equal(tf.argmax(y_hat,1),tf.argmax(Y,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) total_loss = [] val_acc = [] total_train_acc = [] x_Xsis = [] with tf.Session() as sess: tf.global_variables_initializer().run() for i in range(epochs): # x,y = next_batch(X_train,Y_train,batch_size) batchs = int(X_train.shape[0] / batch_size + 1) loss_e = 0. 
for j in range(batchs): batch_x = X_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:] batch_y = Y_train[j*batch_size:min(X_train.shape[0],j*(batch_size+1)),:] sess.run(train_step,feed_dict={X:batch_x,Y:batch_y}) loss_e += sess.run(loss,feed_dict={X:batch_x,Y:batch_y}) # train_step.run(feed_dict={X:x,Y:y}) validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test}) train_acc = sess.run(accuracy,feed_dict={X:X_train,Y:Y_train}) print("epoches: ",i,"val_acc: ",validate_acc,"train_acc",train_acc) total_loss.append(loss_e / batch_size) val_acc.append(validate_acc) total_train_acc.append(train_acc) x_Xsis.append(i) validate_acc = sess.run(accuracy,feed_dict={X:X_test,Y:Y_test}) print("val_acc: ",validate_acc) return (x_Xsis,total_loss,total_train_acc,val_acc) result = train((X_train,Y_train,X_test,Y_test)) def plot_acc(total_train_acc,val_acc,x): plt.figure() plt.plot(x,total_train_acc,'--',color = "red",label="train_acc") plt.plot(x,val_acc,color="green",label="val_acc") plt.xlabel("Epoches") plt.ylabel("acc") plt.legend() plt.show() ```
keras model 训练 train_loss,train_acc再变,但是val_loss,val_test却一直不变,是哪里有问题?
Epoch 1/15 3112/3112 [==============================] - 73s 237ms/step - loss: 8.1257 - acc: 0.4900 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 2/15 3112/3112 [==============================] - 71s 231ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 3/15 3112/3112 [==============================] - 72s 232ms/step - loss: 8.1730 - acc: 0.4929 - val_loss: 8.1763 - val_acc: 0.4427 Epoch 4/15 3112/3112 [==============================] - 71s 229ms/step - loss: 7.0495 - acc: 0.5617 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 5/15 3112/3112 [==============================] - 71s 230ms/step - loss: 5.5504 - acc: 0.6549 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 6/15 3112/3112 [==============================] - 71s 230ms/step - loss: 4.9359 - acc: 0.6931 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 7/15 3112/3112 [==============================] - 71s 230ms/step - loss: 4.8969 - acc: 0.6957 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 8/15 3112/3112 [==============================] - 72s 234ms/step - loss: 4.9446 - acc: 0.6925 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 9/15 3112/3112 [==============================] - 71s 231ms/step - loss: 4.5114 - acc: 0.7201 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 10/15 3112/3112 [==============================] - 73s 237ms/step - loss: 4.7944 - acc: 0.7021 - val_loss: 8.1763 - val_acc: 0.4927 Epoch 11/15 3112/3112 [==============================] - 74s 240ms/step - loss: 4.6789 - acc: 0.7095 - val_loss: 8.1763 - val_acc: 0.4927
在Spyder界面中使用tensorflow进行fashion_mnist数据集学习,结果loss为非数,并且准确率一直未变
1.建立了一个3个全连接层的神经网络; 2.代码如下: ``` import matplotlib as mpl import matplotlib.pyplot as plt #%matplotlib inline import numpy as np import sklearn import pandas as pd import os import sys import time import tensorflow as tf from tensorflow import keras print(tf.__version__) print(sys.version_info) for module in mpl, np, sklearn, tf, keras: print(module.__name__,module.__version__) fashion_mnist = keras.datasets.fashion_mnist (x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data() x_valid, x_train = x_train_all[:5000], x_train_all[5000:] y_valid, y_train = y_train_all[:5000], y_train_all[5000:] #tf.keras.models.Sequential model = keras.models.Sequential() model.add(keras.layers.Flatten(input_shape= [28,28])) model.add(keras.layers.Dense(300, activation="relu")) model.add(keras.layers.Dense(100, activation="relu")) model.add(keras.layers.Dense(10,activation="softmax")) ###sparse为最后输出为index类型,如果为one hot类型,则不需加sparse model.compile(loss = "sparse_categorical_crossentropy",optimizer = "sgd", metrics = ["accuracy"]) #model.layers #model.summary() history = model.fit(x_train, y_train, epochs=10, validation_data=(x_valid,y_valid)) ``` 3.输出结果: ``` runfile('F:/new/new world/deep learning/tensorflow/ex2/tf_keras_classification_model.py', wdir='F:/new/new world/deep learning/tensorflow/ex2') 2.0.0 sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0) matplotlib 3.1.1 numpy 1.16.5 sklearn 0.21.3 tensorflow 2.0.0 tensorflow_core.keras 2.2.4-tf Train on 55000 samples, validate on 5000 samples Epoch 1/10 WARNING:tensorflow:Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. 
Cause: WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x0000025EAB633798> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: 55000/55000 [==============================] - 3s 58us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 2/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 3/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 4/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 5/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 6/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 7/10 55000/55000 [==============================] - 3s 47us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 8/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 9/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 Epoch 10/10 55000/55000 [==============================] - 3s 48us/sample - loss: nan - accuracy: 0.1008 - val_loss: nan - val_accuracy: 0.0914 ```
使用keras进行分类问题时,验证集loss,accuracy 显示0.0000e+00,但是最后画图像时能显示出验证曲线
data_train, data_test, label_train, label_test = train_test_split(data_all, label_all, test_size= 0.2, random_state = 1) data_train, data_val, label_train, label_val = train_test_split(data_train,label_train, test_size = 0.25) data_train = np.asarray(data_train, np.float32) data_test = np.asarray(data_test, np.float32) data_val = np.asarray(data_val, np.float32) label_train = np.asarray(label_train, np.int32) label_test = np.asarray(label_test, np.int32) label_val = np.asarray(label_val, np.int32) training = model.fit_generator(datagen.flow(data_train, label_train_binary, batch_size=200,shuffle=True), validation_data=(data_val,label_val_binary), samples_per_epoch=len(data_train)*8, nb_epoch=30, verbose=1) def plot_history(history): plt.plot(training.history['acc']) plt.plot(training.history['val_acc']) plt.title('model accuracy') plt.xlabel('epoch') plt.ylabel('accuracy') plt.legend(['acc', 'val_acc'], loc='lower right') plt.show() plt.plot(training.history['loss']) plt.plot(training.history['val_loss']) plt.title('model loss') plt.xlabel('epoch') plt.ylabel('loss') plt.legend(['loss', 'val_loss'], loc='lower right') plt.show() plot_history(training) ![图片说明](https://img-ask.csdn.net/upload/201812/10/1544423669_112599.jpg)![图片说明](https://img-ask.csdn.net/upload/201812/10/1544423681_598605.jpg)
Tensorflow测试训练styleGAN时报错 No OpKernel was registered to support Op 'NcclAllReduce' with these attrs.
在测试官方StyleGAN。 运行官方与训练模型pretrained_example.py generate_figures.py 没有问题。GPU工作正常。 运行train.py时报错 尝试只用单个GPU训练时没有报错。 NcclAllReduce应该跟多GPU通信有关,不太了解。 InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] 经过多番google 尝试过 重启 conda install keras-gpu 重新安装tensorflow-gpu==1.10.0(跟官方版本保持一致) ``` …… Building TensorFlow graph... Setting up snapshot image grid... Setting up run dir... Training... Traceback (most recent call last): File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call return fn(*args) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. 
Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 230, in training_loop tflib.run([D_train_op, Gs_update_op], {lod_in: sched.lod, lrate_in: sched.D_lrate, minibatch_in: sched.minibatch}) File "E:\MachineLearning\stylegan-master\dnnlib\tflib\tfutil.py", line 26, in run return tf.get_default_session().run(*args, **kwargs) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 877, in run run_metadata_ptr) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run feed_dict_tensor, options, run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run run_metadata) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'NcclAllReduce' with these 
attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] Caused by op 'TrainD/SumAcrossGPUs/NcclAllReduce', defined at: File "train.py", line 191, in <module> main() File "train.py", line 186, in main dnnlib.submit_run(**kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 290, in submit_run run_wrapper(submit_config) File "E:\MachineLearning\stylegan-master\dnnlib\submission\submit.py", line 242, in run_wrapper util.call_func_by_name(func_name=submit_config.run_func_name, submit_config=submit_config, **submit_config.run_func_kwargs) File "E:\MachineLearning\stylegan-master\dnnlib\util.py", line 257, in call_func_by_name return func_obj(*args, **kwargs) File "E:\MachineLearning\stylegan-master\training\training_loop.py", line 185, in training_loop D_train_op = D_opt.apply_updates() File "E:\MachineLearning\stylegan-master\dnnlib\tflib\optimizer.py", line 135, in apply_updates g = nccl_ops.all_sum(g) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 49, in all_sum return _apply_all_reduce('sum', tensors) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\python\ops\nccl_ops.py", line 230, in _apply_all_reduce shared_name=shared_name)) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\contrib\nccl\ops\gen_nccl_ops.py", line 59, in nccl_all_reduce num_devices=num_devices, shared_name=shared_name, name=name) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func return func(*args, **kwargs) File 
"d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 3156, in create_op op_def=op_def) File "d:\Users\admin\Anaconda3\envs\tfenv\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__ self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'NcclAllReduce' with these attrs. Registered devices: [CPU,GPU], Registered kernels: <no registered kernels> [[Node: TrainD/SumAcrossGPUs/NcclAllReduce = NcclAllReduce[T=DT_FLOAT, num_devices=2, reduction="sum", shared_name="c112", _device="/device:GPU:0"](GPU0/TrainD_grad/gradients/AddN_160)]] ``` ``` #conda list: # Name Version Build Channel _tflow_select 2.1.0 gpu absl-py 0.8.1 pypi_0 pypi alabaster 0.7.12 py36_0 asn1crypto 1.2.0 py36_0 astor 0.8.0 pypi_0 pypi astroid 2.3.2 py36_0 attrs 19.3.0 py_0 babel 2.7.0 py_0 backcall 0.1.0 py36_0 blas 1.0 mkl bleach 3.1.0 py36_0 ca-certificates 2019.10.16 0 certifi 2019.9.11 py36_0 cffi 1.13.1 py36h7a1dbc1_0 chardet 3.0.4 py36_1003 cloudpickle 1.2.2 py_0 colorama 0.4.1 py36_0 cryptography 2.8 py36h7a1dbc1_0 cudatoolkit 9.0 1 cudnn 7.6.4 cuda9.0_0 decorator 4.4.1 py_0 defusedxml 0.6.0 py_0 django 2.2.7 pypi_0 pypi docutils 0.15.2 py36_0 entrypoints 0.3 py36_0 gast 0.3.2 py_0 grpcio 1.25.0 pypi_0 pypi h5py 2.9.0 py36h5e291fa_0 hdf5 1.10.4 h7ebc959_0 icc_rt 2019.0.0 h0cc432a_1 icu 58.2 ha66f8fd_1 idna 2.8 pypi_0 pypi image 1.5.27 pypi_0 pypi imagesize 1.1.0 py36_0 importlib_metadata 0.23 py36_0 intel-openmp 2019.4 245 ipykernel 5.1.3 py36h39e3cac_0 ipython 7.9.0 py36h39e3cac_0 ipython_genutils 0.2.0 py36h3c5d0ee_0 isort 4.3.21 py36_0 jedi 0.15.1 py36_0 jinja2 2.10.3 py_0 jpeg 9b hb83a4c4_2 jsonschema 3.1.1 py36_0 jupyter_client 5.3.4 py36_0 jupyter_core 4.6.1 py36_0 keras-applications 1.0.8 py_0 keras-base 2.2.4 py36_0 keras-gpu 2.2.4 0 keras-preprocessing 1.1.0 py_1 keyring 18.0.0 py36_0 lazy-object-proxy 1.4.3 py36he774522_0 libpng 1.6.37 
h2a8f88b_0 libprotobuf 3.9.2 h7bd577a_0 libsodium 1.0.16 h9d3ae62_0 markdown 3.1.1 py36_0 markupsafe 1.1.1 py36he774522_0 mccabe 0.6.1 py36_1 mistune 0.8.4 py36he774522_0 mkl 2019.4 245 mkl-service 2.3.0 py36hb782905_0 mkl_fft 1.0.15 py36h14836fe_0 mkl_random 1.1.0 py36h675688f_0 more-itertools 7.2.0 py36_0 nbconvert 5.6.1 py36_0 nbformat 4.4.0 py36h3a5bc1b_0 numpy 1.17.3 py36h4ceb530_0 numpy-base 1.17.3 py36hc3f5095_0 numpydoc 0.9.1 py_0 openssl 1.1.1d he774522_3 packaging 19.2 py_0 pandoc 2.2.3.2 0 pandocfilters 1.4.2 py36_1 parso 0.5.1 py_0 pickleshare 0.7.5 py36_0 pillow 6.2.1 pypi_0 pypi pip 19.3.1 py36_0 prompt_toolkit 2.0.10 py_0 protobuf 3.10.0 pypi_0 pypi psutil 5.6.3 py36he774522_0 pycodestyle 2.5.0 py36_0 pycparser 2.19 py36_0 pyflakes 2.1.1 py36_0 pygments 2.4.2 py_0 pylint 2.4.3 py36_0 pyopenssl 19.0.0 py36_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py36h6538335_2 pyreadline 2.1 py36_1 pyrsistent 0.15.4 py36he774522_0 pysocks 1.7.1 py36_0 python 3.6.9 h5500b2f_0 python-dateutil 2.8.1 py_0 pytz 2019.3 py_0 pywin32 223 py36hfa6e2cd_1 pyyaml 5.1.2 py36he774522_0 pyzmq 18.1.0 py36ha925a31_0 qt 5.9.7 vc14h73c81de_0 qtawesome 0.6.0 py_0 qtconsole 4.5.5 py_0 qtpy 1.9.0 py_0 requests 2.22.0 py36_0 rope 0.14.0 py_0 scipy 1.3.1 py36h29ff71c_0 setuptools 39.1.0 pypi_0 pypi sip 4.19.8 py36h6538335_0 six 1.13.0 pypi_0 pypi snowballstemmer 2.0.0 py_0 sphinx 2.2.1 py_0 sphinxcontrib-applehelp 1.0.1 py_0 sphinxcontrib-devhelp 1.0.1 py_0 sphinxcontrib-htmlhelp 1.0.2 py_0 sphinxcontrib-jsmath 1.0.1 py_0 sphinxcontrib-qthelp 1.0.2 py_0 sphinxcontrib-serializinghtml 1.1.3 py_0 spyder 3.3.6 py36_0 spyder-kernels 0.5.2 py36_0 sqlite 3.30.1 he774522_0 sqlparse 0.3.0 pypi_0 pypi tensorboard 1.10.0 py36he025d50_0 tensorflow 1.10.0 gpu_py36h3514669_0 tensorflow-base 1.10.0 gpu_py36h6e53903_0 tensorflow-gpu 1.10.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi testpath 0.4.2 py36_0 tornado 6.0.3 py36he774522_0 traitlets 4.3.3 py36_0 typed-ast 1.4.0 py36he774522_0 urllib3 1.25.6 pypi_0 pypi vc 
14.1 h0510ff6_4 vs2015_runtime 14.16.27012 hf0eaf9b_0 wcwidth 0.1.7 py36h3d5aa90_0 webencodings 0.5.1 py36_1 werkzeug 0.16.0 py_0 wheel 0.33.6 py36_0 win_inet_pton 1.1.0 py36_0 wincertstore 0.2 py36h7fe50ca_0 wrapt 1.11.2 py36he774522_0 yaml 0.1.7 hc54c509_2 zeromq 4.3.1 h33f27b4_3 zipp 0.6.0 py_0 zlib 1.2.11 h62dcd97_3 ``` 2*RTX2080Ti driver 4.19.67
Tensorflow 2.0 : When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
下面代码每次执行到epochs 中的最后一个step 都会报错,请教大牛这是什么问题呢? ``` import tensorflow_datasets as tfds dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True, as_supervised=True) train_dataset,test_dataset = dataset['train'],dataset['test'] tokenizer = info.features['text'].encoder print('vocabulary size: ', tokenizer.vocab_size) sample_string = 'Hello world, tensorflow' tokenized_string = tokenizer.encode(sample_string) print('tokened id: ', tokenized_string) src_string= tokenizer.decode(tokenized_string) print(src_string) for t in tokenized_string: print(str(t) + ': '+ tokenizer.decode([t])) BUFFER_SIZE=6400 BATCH_SIZE=64 num_train_examples = info.splits['train'].num_examples num_test_examples=info.splits['test'].num_examples print("Number of training examples: {}".format(num_train_examples)) print("Number of test examples: {}".format(num_test_examples)) train_dataset=train_dataset.shuffle(BUFFER_SIZE) train_dataset=train_dataset.padded_batch(BATCH_SIZE,train_dataset.output_shapes) test_dataset=test_dataset.padded_batch(BATCH_SIZE,test_dataset.output_shapes) def get_model(): model=tf.keras.Sequential([ tf.keras.layers.Embedding(tokenizer.vocab_size,64), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), tf.keras.layers.Dense(64,activation='relu'), tf.keras.layers.Dense(1,activation='sigmoid') ]) return model model =get_model() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) import math #from tensorflow import keras #train_dataset= keras.preprocessing.sequence.pad_sequences(train_dataset, maxlen=BUFFER_SIZE) history =model.fit(train_dataset, epochs=2, steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ), validation_data= test_dataset) ``` Train on 10 steps Epoch 1/2 9/10 [==========================>...] 
- ETA: 3s - loss: 0.6955 - accuracy: 0.4479 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-111-8ddec076c096> in <module> 6 epochs=2, 7 steps_per_epoch=(math.ceil(BUFFER_SIZE/BATCH_SIZE) -90 ), ----> 8 validation_data= test_dataset) /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs) 726 max_queue_size=max_queue_size, 727 workers=workers, --> 728 use_multiprocessing=use_multiprocessing) 729 730 def evaluate(self, /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs) 672 validation_steps=validation_steps, 673 validation_freq=validation_freq, --> 674 steps_name='steps_per_epoch') 675 676 def evaluate(self, /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs) 437 validation_in_fit=True, 438 prepared_feed_values_from_dataset=(val_iterator is not None), --> 439 steps_name='validation_steps') 440 if not isinstance(val_results, list): 441 val_results = [val_results] 
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs) 174 if not is_dataset: 175 num_samples_or_steps = _get_num_samples_or_steps(ins, batch_size, --> 176 steps_per_epoch) 177 else: 178 num_samples_or_steps = steps_per_epoch /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in _get_num_samples_or_steps(ins, batch_size, steps_per_epoch) 491 return steps_per_epoch 492 return training_utils.check_num_samples(ins, batch_size, steps_per_epoch, --> 493 'steps_per_epoch') 494 495 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_num_samples(ins, batch_size, steps, steps_name) 422 raise ValueError('If ' + steps_name + 423 ' is set, the `batch_size` must be None.') --> 424 if check_steps_argument(ins, steps, steps_name): 425 return None 426 /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_utils.py in check_steps_argument(input_data, steps, steps_name) 1199 raise ValueError('When using {input_type} as input to a model, you should' 1200 ' specify the `{steps_name}` argument.'.format( -> 1201 input_type=input_type_str, steps_name=steps_name)) 1202 return True 1203 ValueError: When using data tensors as input to a model, you should specify the `steps_per_epoch` argument.
运行tensorflow时出现tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed这个错误
Running TensorFlow fails with `tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed`. Searching around suggests the GPU is already occupied. The trouble starts here:

```
2019-10-17 09:28:49.495166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6382 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
(60000, 28, 28) (60000, 10)
2019-10-17 09:28:51.275415: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64_100.dll not found
```

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277238_292620.png)

The final error shown:

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277311_655722.png)

I tried the fixes suggested online, for example adding:

```
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
```

but then got:

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277460_72752.png)

Now I'm stuck. I'm a beginner trying a simple digit-recognition example and followed a tutorial step by step, but my versions may differ from the tutorial's — I just installed TensorFlow 2.0 and the following:

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571277627_439100.png)

Could this be a version problem? Urgently asking the experts for help — are there any other methods I could try? A simple addition test program runs fine, and while the digit-recognition script runs I checked GPU utilization: it peaks at only 0.2%. Here is the full digit-recognition code:

```
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, datasets

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

#gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
#sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

(x, y), (x_val, y_val) = datasets.mnist.load_data()
x = tf.convert_to_tensor(x, dtype=tf.float32) / 255.
y = tf.convert_to_tensor(y, dtype=tf.int32)
y = tf.one_hot(y, depth=10)
print(x.shape, y.shape)
train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
train_dataset = train_dataset.batch(200)

model = keras.Sequential([
    layers.Dense(512, activation='relu'),
    layers.Dense(256, activation='relu'),
    layers.Dense(10)])

optimizer = optimizers.SGD(learning_rate=0.001)

def train_epoch(epoch):
    # Step 4. loop
    for step, (x, y) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            # [b, 28, 28] => [b, 784]
            x = tf.reshape(x, (-1, 28 * 28))
            # Step 1. compute output: [b, 784] => [b, 10]
            out = model(x)
            # Step 2. compute loss
            loss = tf.reduce_sum(tf.square(out - y)) / x.shape[0]
        # Step 3. optimize and update w1, w2, w3, b1, b2, b3
        grads = tape.gradient(loss, model.trainable_variables)
        # w' = w - lr * grad
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if step % 100 == 0:
            print(epoch, step, 'loss:', loss.numpy())

def train():
    for epoch in range(30):
        train_epoch(epoch)

if __name__ == '__main__':
    train()
```

Any suggestions or fixes would be much appreciated!
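A note worth checking here: the `Could not load dynamic library 'cublas64_100.dll'` warning usually means the CUDA 10.0 runtime that TensorFlow 2.0 expects is missing from the PATH, which would explain why Blas GEMM later fails to launch. Also, the `tf.GPUOptions`/`tf.Session` API above is the TF 1.x style and is gone in TF 2.x; a TF 2.x config sketch for limiting memory would instead use memory growth (untested here, and only relevant once the cuBLAS DLL loads):

```python
import tensorflow as tf

# TF 2.x replacement for the tf.Session/GPUOptions snippet: allocate GPU
# memory on demand instead of grabbing the whole card up front. Must run
# before any op touches the GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```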
Per-batch acc collected via a Keras callback has insufficient precision
I want to use a callback to collect the acc of every batch during training, but the per-batch acc values only keep two decimal places, while the per-epoch acc values keep many decimal places, and both the per-batch and per-epoch loss values keep many decimal places. Code as follows:

```
class LossHistory(callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = {'batch': [], 'epoch': []}
        self.accuracy = {'batch': [], 'epoch': []}
        self.val_loss = {'batch': [], 'epoch': []}
        self.val_acc = {'batch': [], 'epoch': []}

    def on_batch_end(self, batch, logs={}):
        self.losses['batch'].append(logs.get('loss'))
        self.accuracy['batch'].append(logs.get('acc'))
        self.val_loss['batch'].append(logs.get('val_loss'))
        self.val_acc['batch'].append(logs.get('val_acc'))

    def on_epoch_end(self, batch, logs={}):
        self.losses['epoch'].append(logs.get('loss'))
        self.accuracy['epoch'].append(logs.get('acc'))
        self.val_loss['epoch'].append(logs.get('val_loss'))
        self.val_acc['epoch'].append(logs.get('val_acc'))

    def loss_plot(self, loss_type):
        iters = range(len(self.losses[loss_type]))
        plt.figure()
        # acc
        plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
        # loss
        plt.plot(iters, self.losses[loss_type], 'g', label='train loss')
        if loss_type == 'epoch':
            # val_acc
            plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')
            # val_loss
            plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss')
        plt.grid(True)
        plt.xlabel(loss_type)
        plt.ylabel('acc-loss')
        plt.legend(loc="upper right")
        plt.show()


class Csr:
    def __init__(self, voc):
        self.model = Sequential()
        # B*L
        self.model.add(Embedding(voc.num_words, 300, mask_zero=True,
                                 weights=[voc.index2emb], trainable=False))
        # B*L*256
        self.model.add(GRU(256))
        # B*256
        self.model.add(Dropout(0.5))
        self.model.add(Dense(1, activation='sigmoid'))
        # B*1
        self.model.compile(loss='binary_crossentropy', optimizer='rmsprop',
                           metrics=['accuracy'])
        print('compole complete')

    def train(self, x_train, y_train, b_s=50, epo=10):
        print('training.....')
        history = LossHistory()
        his = self.model.fit(x_train, y_train, batch_size=b_s, epochs=epo,
                             callbacks=[history])
        history.loss_plot('batch')
        print('training complete')
        return his, history
```

The program output:

![图片说明](https://img-ask.csdn.net/upload/201905/14/1557803291_621582.png)
![图片说明](https://img-ask.csdn.net/upload/201905/14/1557803304_240896.png)
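One likely explanation (an assumption, not verified against this Keras version): the values a callback stores from `logs` are full-precision floats, and only the display — the progress bar, or the default repr of a `float32` — rounds them. A Keras-free sketch of checking this:

```python
import numpy as np

# The progress bar formats metrics to a few decimals, but the value stored
# from logs.get('acc') is a full-precision float32; format it yourself.
acc = np.float32(0.51234567)  # hypothetical per-batch accuracy
shown = f"{acc:.2f}"          # what a rounded display looks like
stored = float(acc)           # what actually lands in your list
print(shown)                  # 0.51
print(f"{stored:.8f}")        # full precision survives
```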
Accuracy stays constant after training a transfer-learning network for medical image analysis
I'm fine-tuning a VGG16 network whose weights are imported from Keras; the top three layers form a small classifier whose weights are learned during training. The training set has about 300 samples and the validation set about 80. After the program starts, loss and acc change between the first and second epoch and then stop changing, and the validation accuracy stays close to zero throughout. I'd be grateful if someone versed in CNNs and machine learning could take a look at what went wrong.

```
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import GlobalAveragePooling2D
import numpy as np
from keras.optimizers import RMSprop
from keras.utils import np_utils
import matplotlib.pyplot as plt
from keras import regularizers
from keras.applications.vgg16 import VGG16
from keras import optimizers
from keras.layers.core import Lambda
from keras import backend as K
from keras.models import Model

# A LossHistory callback class that records loss and acc so they can be plotted with Keras
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):  # at on_batch_begin, logs contains 'size', the number of samples in the current batch
        self.losses = {'batch': [], 'epoch': []}
        self.accuracy = {'batch': [], 'epoch': []}
        self.val_loss = {'batch': [], 'epoch': []}
        self.val_acc = {'batch': [], 'epoch': []}

    def on_batch_end(self, batch, logs={}):
        self.losses['batch'].append(logs.get('loss'))
        self.accuracy['batch'].append(logs.get('acc'))
        self.val_loss['batch'].append(logs.get('val_loss'))
        self.val_acc['batch'].append(logs.get('val_acc'))

    def on_epoch_end(self, batch, logs={}):  # fetch the values from logs after each epoch
        self.losses['epoch'].append(logs.get('loss'))
        self.accuracy['epoch'].append(logs.get('acc'))
        self.val_loss['epoch'].append(logs.get('val_loss'))
        self.val_acc['epoch'].append(logs.get('val_acc'))

    def loss_plot(self, loss_type):
        iters = range(len(self.losses[loss_type]))  # x-axis of the plots
        plt.figure()  # create an empty figure
        if loss_type == 'epoch':
            plt.subplot(211)
            plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
            plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')  # val_acc drawn in blue
            plt.grid(True)
            plt.xlabel(loss_type)
            plt.ylabel('accuracy')
            plt.show()
            plt.subplot(212)
            plt.plot(iters, self.losses[loss_type], 'r', label='train loss')
            plt.plot(iters, self.val_loss[loss_type], 'b', label='val loss')
            plt.xlabel(loss_type)
            plt.ylabel('loss')
            plt.legend(loc="upper right")  # combine the legends on one plot; loc sets the position
            plt.show()
            print(np.mean(self.val_acc[loss_type]))
            print(np.std(self.val_acc[loss_type]))

seed = 7
np.random.seed(seed)

# training hyperparameters
batch_size = 32
num_classes = 2
epochs = 100
weight_decay = 0.0005
learn_rate = 0.0001

# load training/test data, resize, and show basic info
X_train = np.load(open('/image_BRATS_240_240_3_normal.npy', mode='rb'))
Y_train = np.load(open('/label_BRATS_240_240_3_normal.npy', mode='rb'))
Y_train = keras.utils.to_categorical(Y_train, 2)

# build the network
model_vgg16 = VGG16(include_top=False, weights='imagenet', input_shape=(240, 240, 3), classes=2)
model_vgg16.layers.pop()
model = Sequential()
model.add(model_vgg16)
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(436, activation='relu'))  # returns a feature vector
model.add(Dense(2, activation='softmax'))
# model(inputs=model_vgg16.input, outputs=predictions)
for layer in model_vgg16.layers[:13]:
    layer.trainable = False
model_vgg16.summary()
model.compile(optimizer=RMSprop(lr=learn_rate, decay=weight_decay),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
history = LossHistory()
model.fit(X_train, Y_train,
          batch_size=batch_size, epochs=epochs,
          verbose=1,
          shuffle=True,
          validation_split=0.2,
          callbacks=[history])

# model evaluation
history.loss_plot('epoch')
```

For example:

![实验运行结果:](https://img-ask.csdn.net/upload/201804/19/1524134477_869793.png)
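One detail worth checking in the compile step above (an observation, not a confirmed diagnosis): `RMSprop(lr=learn_rate, decay=weight_decay)` treats `decay` as *learning-rate* decay, not L2 weight decay, so the effective learning rate shrinks with every update; L2 regularization in Keras belongs in `kernel_regularizer`, not the optimizer. A stdlib sketch of the schedule that `decay` actually implements:

```python
# Keras optimizers' `decay` argument implements iteration-based learning-rate
# decay: lr_t = lr / (1 + decay * t). Passing a weight-decay value here only
# shrinks the learning rate over time.
lr, decay = 1e-4, 5e-4

def effective_lr(t):
    return lr / (1.0 + decay * t)

print(effective_lr(0))      # starts at the configured lr
print(effective_lr(10000))  # roughly six times smaller after 10k updates
```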
Keras multi-GPU training: one of the GPUs is never used
I've already set things up with `multi_gpu_model`, but at run time only one GPU gets used and the other card sits completely idle. What could be the cause?

```
from keras.utils import multi_gpu_model
...
model = build_model()
optimizer = keras.optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-06)
model_parallel = multi_gpu_model(model, 2)
model_parallel.compile(loss='mse', optimizer=optimizer, metrics=['mae'])
...
history = model_parallel.fit(train_data, y_train, epochs=EPOCHS,
                             validation_split=0.2, verbose=1, callbacks=[PrintDot()])
```

![图片说明](https://img-ask.csdn.net/upload/201911/15/1573813427_19477.jpg)
![图片说明](https://img-ask.csdn.net/upload/201911/15/1573813436_730710.jpg)
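Two things worth ruling out before blaming `multi_gpu_model` (both are assumptions to check, not confirmed causes): the process may only be able to see one device via `CUDA_VISIBLE_DEVICES`, and each batch is split across replicas, so a tiny batch can leave one card nearly idle. A config sketch:

```python
import os

# Must run before TensorFlow initializes the GPUs: expose both cards to this
# process (hypothetical device ids "0,1"). If this variable already restricts
# the process to one id, multi_gpu_model silently runs on a single device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# multi_gpu_model splits each batch across the replicas, so with 2 GPUs an
# effective batch of 64 means 32 samples per card.
batch_size = 64
per_gpu = batch_size // 2
print(per_gpu)  # 32
```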
With Keras, reading data via ImageDataGenerator.flow gives an extremely low training ACC
I'm building a character-recognition network. The dataset consists of images whose filenames are numbered indices, and the label is taken from each image's filename. I want to use ImageDataGenerator and flow() to improve sample generalization and feed the generated data to the network, but doing so gives acc = 1/num_classes, i.e. essentially zero. Where is the problem?

```
datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1
)

def read_train_image(self, name):
    myimg = Image.open(name).convert('RGB')
    return np.array(myimg)

def train(self):
    # training set
    train_img_list = []
    train_label_list = []
    # test set
    test_img_list = []
    test_label_list = []

    for file in os.listdir('train'):
        files_img_in_array = self.read_train_image(name='train/' + file)
        train_img_list.append(files_img_in_array)  # Image list add up
        train_label_list.append(int(file.split('_')[0]))  # label list add up

    for file in os.listdir('test'):
        files_img_in_array = self.read_train_image(name='test/' + file)
        test_img_list.append(files_img_in_array)  # Image list add up
        test_label_list.append(int(file.split('_')[0]))  # label list add up

    train_img_list = np.array(train_img_list)
    train_label_list = np.array(train_label_list)
    test_img_list = np.array(train_img_list)
    test_label_list = np.array(train_label_list)

    train_label_list = np_utils.to_categorical(train_label_list, 5788)
    test_label_list = np_utils.to_categorical(test_label_list, 5788)

    train_img_list = train_img_list.astype('float32')
    test_img_list = test_img_list.astype('float32')
    test_img_list /= 255.0
    train_img_list /= 255.0
```

That is the image and label handling — both are stored in lists. Below is training with fit_generator:

```
model.fit_generator(
    self.datagen.flow(x=train_img_list, y=train_label_list, batch_size=2),
    samples_per_epoch=len(train_img_list),
    epochs=10,
    validation_data=(test_img_list, test_label_list),
)
```
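Before suspecting the generator, it's worth sanity-checking the label pipeline (a sketch under the assumption that filenames follow the `<class-index>_*.png` pattern used above); note also that in the code above `test_img_list` and `test_label_list` are overwritten from the *train* arrays, so the validation data is not what it appears to be:

```python
import numpy as np

# Verify that the integer parsed from each filename ends up as the argmax of
# its one-hot row before the data reaches flow() (dummy filenames below).
filenames = ["12_a.png", "7_b.png"]
labels = np.array([int(f.split("_")[0]) for f in filenames])
num_classes = 5788
one_hot = np.eye(num_classes, dtype=np.float32)[labels]

print(one_hot.shape)        # (2, 5788)
print(one_hot[0].argmax())  # 12
```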
Implementing self-attention and Recall/F1-score values under Keras?
Could the experts please help me with two things:
(1) Why do I get no Precision, Recall, or F1-score values back?
(2) Why, after adding a self-attention layer before the CNN, does the trained acc drop to around 0.78?
(First-year grad student here — a detailed explanation would be hugely appreciated.)

```
import os  # os module, used to check whether files exist
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

maxlen = 380               # truncate sentences to this length
training_samples = 20000   # number of training samples
validation_samples = 5000  # number of validation samples
max_words = 10000          # only consider the 10,000 most common words in the dataset

def dataProcess():
    imdb_dir = 'data/aclImdb'  # base path, reused below
    # process the training set
    train_dir = os.path.join(imdb_dir, 'train')  # append the subdirectory
    train_labels = []
    train_texts = []
    for label_type in ['neg', 'pos']:
        dir_name = os.path.join(train_dir, label_type)
        for fname in os.listdir(dir_name):  # all filenames in the directory
            if fname[-4:] == '.txt':
                f = open(os.path.join(dir_name, fname), 'r', encoding='utf8')
                train_texts.append(f.read())
                f.close()
                if label_type == 'neg':
                    train_labels.append(0)
                else:
                    train_labels.append(1)
    # process the test set
    test_dir = os.path.join(imdb_dir, 'test')
    test_labels = []
    test_texts = []
    for label_type in ['neg', 'pos']:
        dir_name = os.path.join(test_dir, label_type)
        for fname in sorted(os.listdir(dir_name)):
            if fname[-4:] == '.txt':
                f = open(os.path.join(dir_name, fname), 'r', encoding='utf8')
                test_texts.append(f.read())
                f.close()
                if label_type == 'neg':
                    test_labels.append(0)
                else:
                    test_labels.append(1)
    # tokenize the data and split into training and validation sets
    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(train_texts)  # build the word index
    sequences = tokenizer.texts_to_sequences(train_texts)  # vectorize texts as integer index sequences
    word_index = tokenizer.word_index  # word-to-index dictionary
    print('Found %s unique tokens.' % len(word_index))
    data = pad_sequences(sequences, maxlen=maxlen)
    train_labels = np.asarray(train_labels)  # convert the list to an array
    print('Shape of data tensor:', data.shape)
    print('Shape of label tensor:', train_labels.shape)
    indices = np.arange(data.shape[0])  # review order 0,1,2,3...
    np.random.shuffle(indices)          # shuffle the review order, e.g. 3,1,2,0
    data = data[indices]
    train_labels = train_labels[indices]
    x_train = data[:training_samples]
    y_train = train_labels[:training_samples]
    x_val = data[training_samples: training_samples + validation_samples]
    y_val = train_labels[training_samples: training_samples + validation_samples]
    # the test set must be vectorized the same way
    test_sequences = tokenizer.texts_to_sequences(test_texts)
    x_test = pad_sequences(test_sequences, maxlen=maxlen)
    y_test = np.asarray(test_labels)
    return x_train, y_train, x_val, y_val, x_test, y_test, word_index

embedding_dim = 100  # embedding dimension

# Build an embedding matrix from the pre-trained GloVe file that can be loaded into the Embedding layer
def load_glove(word_index):  # load the GloVe word vectors
    embedding_file = 'data/glove.6B'
    embeddings_index = {}  # dict mapping word -> vector
    f = open(os.path.join(embedding_file, 'glove.6B.100d.txt'), 'r', encoding='utf8')
    for line in f:
        values = line.split()
        word = values[0]
        coefs = np.asarray(values[1:], dtype='float32')
        embeddings_index[word] = coefs
    f.close()
    # convert to a matrix: the embedding matrix loadable into the Embedding
    # layer, of shape (max_words, embedding_dim)
    embedding_matrix = np.zeros((max_words, embedding_dim))
    for word, i in word_index.items():  # words and their indices
        if i >= max_words:
            continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
    return embedding_matrix

if __name__ == '__main__':
    x_train, y_train, x_val, y_val, x_test, y_test, word_index = dataProcess()
    embedding_matrix = load_glove(word_index)
    # the embedding matrix can be saved here to make later fine-tuning easier

    from keras.models import Sequential
    from keras.layers.core import Dense, Dropout, Activation, Flatten
    from keras.layers.recurrent import LSTM
    from keras.layers import Embedding
    from keras.layers import Bidirectional
    from keras.layers import Conv1D, MaxPooling1D
    import keras
    from keras_self_attention import SeqSelfAttention

    model = Sequential()
    model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
    model.add(SeqSelfAttention(attention_activation='sigmod'))
    model.add(Conv1D(filters=64, kernel_size=5, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=4))
    model.add(Dropout(0.25))
    model.add(Bidirectional(LSTM(64, activation='tanh', dropout=0.2, recurrent_dropout=0.2)))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.summary()
    model.layers[0].set_weights([embedding_matrix])
    model.layers[0].trainable = False

    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['acc'])

    class Metrics(Callback):
        def on_train_begin(self, logs={}):
            self.val_f1s = []
            self.val_recalls = []
            self.val_precisions = []

        def on_epoch_end(self, epoch, logs={}):
            val_predict = (np.asarray(self.model.predict(self.validation_data[0]))).round()
            val_targ = self.validation_data[1]
            _val_f1 = f1_score(val_targ, val_predict)
            _val_recall = recall_score(val_targ, val_predict)
            _val_precision = precision_score(val_targ, val_predict)
            self.val_f1s.append(_val_f1)
            self.val_recalls.append(_val_recall)
            self.val_precisions.append(_val_precision)
            return

    metrics = Metrics()
    history = model.fit(x_train, y_train,
                        epochs=10,
                        batch_size=32,
                        validation_data=(x_val, y_val),
                        callbacks=[metrics])
    model.save_weights('pre_trained_glove_model.h5')  # save the result
```
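Two editorial notes (hedged, not a confirmed diagnosis): `attention_activation='sigmod'` in the model above is presumably meant to be `'sigmoid'`, and for question (1), `self.validation_data` can be `None` in some Keras versions, which silently breaks a metrics callback. The metric math itself can be verified without Keras — a plain-Python sketch of what the `Metrics` callback computes, on hypothetical toy labels:

```python
def precision_recall_f1(y_true, y_pred):
    # Binary-classification counts, computed directly from the label lists.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1([1, 0, 1, 1], [1, 0, 0, 1]))  # (1.0, 0.666..., 0.8)
```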
tensorflow.python.framework.errors_impl.UnimplementedError: Fused conv implementation does not support grouped convolutions for now. [[{{node conv2d_11/BiasAdd}}]]
Code as follows:

```
from imageai.Prediction import ImagePrediction
import os

execution_path = os.getcwd()
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath(os.path.join(execution_path, "h5model/resnet50_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()
predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "1.jpg"))
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction, ":", eachProbability)
```

The error output:

![](https://img-ask.csdn.net/upload/201910/24/1571911073_718017.png)

Environment: tensorflow-gpu 2.0.0, scipy 1.3.1, keras 2.1.5

I found a similar error on Stack Overflow where the author said the image had not been converted to black-and-white, but I still get the same error after converting it. Any help from the experts would be appreciated.
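Two commonly reported causes of this error (assumptions to verify, not a confirmed diagnosis): a Keras/TensorFlow version mismatch (keras 2.1.5 predates tensorflow 2.0, and ImageAI releases of that era targeted TF 1.x), or feeding an image whose channel count doesn't match what the convolution was built for. A NumPy sketch of the channel check:

```python
import numpy as np

# A greyscale decode yields shape (H, W); conv layers built for RGB expect
# (H, W, 3). Replicating the single channel restores the expected shape.
img = np.zeros((224, 224), dtype=np.uint8)  # stand-in for a decoded grey image
if img.ndim == 2:
    img = np.stack([img] * 3, axis=-1)
print(img.shape)  # (224, 224, 3)
```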
Error when plotting the model accuracy evaluation results with Keras:
After building the deep-learning model, I train it with backpropagation.

The training configuration:

```
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```

Run training:

```
train_history = model.fit(x=x_Train_normalize,
                          y=y_Train_OneHot, validation_split=0.2,
                          epochs=10, batch_size=200, verbose=2)
```

which outputs:

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571243584_952792.png)

Define show_train_history to display the training process:

```
import matplotlib.pyplot as plt
def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
```

Plot the accuracy results:

```
show_train_history(train_history, 'acc', 'val_acc')
```

which fails with:

![图片说明](https://img-ask.csdn.net/upload/201910/17/1571243832_179270.png)

What is going on? Please help!
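A likely cause (assuming a TF-2.x-era Keras, since the metric was given as `'accuracy'` at compile time): the history keys are `'accuracy'`/`'val_accuracy'`, not `'acc'`/`'val_acc'`, so indexing `train_history.history['acc']` raises a KeyError. A tolerant-lookup sketch on a dummy history dict:

```python
# Map old-style metric names to whatever key the history dict actually holds.
def get_metric(history, name):
    aliases = {"acc": "accuracy", "val_acc": "val_accuracy"}
    return history.get(name, history.get(aliases.get(name, name)))

history = {"accuracy": [0.9, 0.95], "val_accuracy": [0.8, 0.85]}  # dummy history
print(get_metric(history, "acc"))      # [0.9, 0.95]
print(get_metric(history, "val_acc"))  # [0.8, 0.85]
```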
utils.apply_modifications fails when doing Keras visualization
**#(1) Generated model.h5 from the MNIST dataset:**

```
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Activation, Input
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 5

# image height and width
img_rows, img_cols = 28, 28

# load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# set the image format
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# build the network
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(54, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax', name='preds'))

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save('model.h5')
```

**#(2) Testing with the generated file:**

```
from keras.models import load_model
from vis.utils import utils
from keras import activations

model = load_model('model.h5')
layer_idx = utils.find_layer_idx(model, 'preds')
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
```

Error: FileNotFoundError: [WinError 3] The system cannot find the path specified: '/tmp/curzzxs_.h5'
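The path in the error is the giveaway: keras-vis's `apply_modifications` saves the rebuilt model under a hard-coded `/tmp/...` path, which does not exist on Windows. How that path gets into keras-vis (editing its `utils.py`, or pre-creating the directory) is left as an assumption, but a portable temp path is built like this:

```python
import os
import tempfile

# Build a temp-file path valid on Windows as well as Unix — the kind of path
# keras-vis would need instead of the hard-coded '/tmp/....h5'.
model_path = os.path.join(tempfile.gettempdir(), "temp_model.h5")
print(model_path)
# Unlike '/tmp' on Windows, this directory is guaranteed to exist:
assert os.path.isdir(os.path.dirname(model_path))
```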
Why does Keras report ACC: 1.0000, Recall: 1.0000, F1-score: 1.0000, Precision: 1.0000?
Binary image classification with Keras; after only 5 epochs the results are:

```
[[205   0]
 [  0  28]]
Keras AUC: 1.0
AUC: 1.0000
ACC: 1.0000
Recall: 1.0000
F1-score: 1.0000
Precesion: 1.0000
```

Code:

```
data = np.load('.npz')
image_data, label_data = data['image'], data['label']
skf = StratifiedKFold(n_splits=3, shuffle=True)
for train, test in skf.split(image_data, label_data):
    train_x = image_data[train]
    test_x = image_data[test]
    train_y = label_data[train]
    test_y = label_data[test]

train_x = np.array(train_x)
test_x = np.array(test_x)
train_x = train_x.reshape(train_x.shape[0], 1, 28, 28)
test_x = test_x.reshape(test_x.shape[0], 1, 28, 28)
train_x = train_x.astype('float32')
test_x = test_x.astype('float32')
train_x /= 255
test_x /= 255
train_y = np.array(train_y)
test_y = np.array(test_y)

model.compile(optimizer='rmsprop', loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_x, train_y, batch_size=64, epochs=5, verbose=1, validation_data=(test_x, test_y))
```

Judging by the results there must be a glaring mistake in the code. Could the experts tell me where it is? Thanks.
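Scores this perfect usually point to leakage rather than a great model (a hypothesis to test, not a diagnosis): for example, exact-duplicate images landing in both a training and a test fold. A quick NumPy duplicate check you could run on `image_data` before cross-validating, shown here on dummy data with a planted duplicate:

```python
import numpy as np

# If the same image appears in both a training and a test fold, the model
# can score perfectly by memorisation. Count exact duplicate rows first.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 28 * 28))
images[1] = images[0]  # planted duplicate for the demo
n_unique = len(np.unique(images, axis=0))
print(len(images) - n_unique)  # number of exact duplicates found
```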
Keras progress bar prints on multiple lines when run from PyCharm
Recently, when running Keras code in PyCharm, the progress bar prints across multiple lines and I don't know why; when I run the same code in Spyder the progress bar updates on a single line as it should. The code is a standard deep-learning example. I couldn't find a good fix by searching, so I'd be grateful for help:

```
from keras import layers, models
from keras.datasets import mnist
from keras.utils import to_categorical

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1))
test_images = test_images.astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPool2D(2, 2))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPool2D(2, 2))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=6, batch_size=64)
#test_loss, test_acc = model.evaluate(test_images, test_labels)
# print(test_loss, test_acc)
```

![图片说明](https://img-ask.csdn.net/upload/201910/07/1570448232_727191.png)
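A likely mechanism (an assumption about the PyCharm console, not a verified diagnosis): the Keras progress bar redraws a single line using carriage returns (`\r`), and consoles that don't honour `\r` print every redraw on a new line. Common workarounds are enabling "Emulate terminal in output console" in the PyCharm run configuration, or passing `verbose=2` to `fit` so Keras prints one line per epoch. A small sketch of the mechanism:

```python
import io

# Simulate what the progress bar writes: updates separated by '\r', not '\n'.
buf = io.StringIO()
for step in (1, 2, 3):
    buf.write(f"step {step}/3\r")
output = buf.getvalue()

# A real terminal overwrites the line in place; a console that ignores '\r'
# shows every update, which looks like multi-line printing.
print(output.count("\r"))  # 3
```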
How can an MLP load doc2vec (.d2v) model data for training?
The MLP model is as follows:

```
def MySimpleMLP(feature=700, vec_size=50):
    auc_roc = LSTM.as_keras_metric(tf.compat.v1.metrics.auc)
    model = Sequential()
    model.add(Flatten())
    model.add(Dense(32, activation='relu', input_shape=(52,)))
    model.add(Dropout(0.2))
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='softmax'))
    # compile model
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=[auc_roc])
    return model
```

The training call:

```
model.fit(trainData, trainLabel, validation_split=0.2, epochs=10, batch_size=64, verbose=2)
```

The doc2vec model is built on imdb_50.d2v. Thanks in advance, experts!
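A hedged sketch of the loading step: with gensim you would load the model via `gensim.models.doc2vec.Doc2Vec.load('imdb_50.d2v')`, pull one 50-dim vector per document, and stack them into the `(n_samples, 50)` array the MLP expects. Here the gensim lookup is replaced by a stub so the shape logic is self-contained, and the tag scheme is an assumption. Note also that `Dense(1, activation='softmax')` in the model above always outputs 1.0 — a single-unit binary output normally uses `'sigmoid'`.

```python
import numpy as np

def doc_vector(doc_id, vec_size=50):
    # Stub standing in for `model.docvecs[doc_id]` (or `model.infer_vector(...)`)
    # from a loaded gensim Doc2Vec model.
    rng = np.random.default_rng(doc_id)
    return rng.standard_normal(vec_size)

# Stack one vector per document into the 2-D array Keras expects.
train_data = np.stack([doc_vector(i) for i in range(4)])
train_label = np.array([0, 1, 0, 1], dtype=np.float32)
print(train_data.shape)  # (4, 50)
```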
Keras reports a memory error when running a CNN
As the title says: I originally taught myself TF, and yesterday I jumped into Keras. Without a server, I installed Theano-backed Keras on my entry-level Lenovo laptop. I first ran a fully-connected network with no problems, then built a very small CNN (code below). `model.summary()` prints the network structure fine, but once it actually runs, a message box pops up with an error.

Code:

```
import keras
import numpy as np
from keras.models import load_model

input1 = keras.layers.Input(shape=(25,))
x = keras.layers.Reshape([5, 5, 1])(input1)
x1 = keras.layers.Conv2D(filters=2, kernel_size=(2, 2), strides=(1, 1), padding='valid', activation='elu')(x)
x2 = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid')(x1)
x3 = keras.layers.Conv2D(filters=4, kernel_size=(2, 2), strides=(1, 1), padding='valid', activation='elu')(x2)
x4 = keras.layers.AveragePooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid')(x3)
x5 = keras.layers.Reshape([4 * 4 * 2, ])(x1)
xx = keras.layers.Dense(1, activation='elu')(x5)
model = keras.models.Model(inputs=input1, outputs=xx)
model.summary()
model.compile(loss='mse', optimizer='sgd')

def data():
    data = np.random.randint(0, 2, [1, 25])
    return data

def num(data):
    data = np.reshape(data, [25])
    sum_ = 0
    for i in data:
        sum_ = sum_ + i
    if sum_ > 10:
        result = [[1]]
    else:
        result = [[0]]
    return result

while True:
    for i in range(100):
        x = data()
        y = num(x)
        cost = model.train_on_batch([x], [y])
        print(i)
    x = data()
    y = num(x)
    cost = model.evaluate(x, y)
    print('loss=', cost)
    x = data()
    y = num(x)
    print('x=', x)
    print('y=', y)
    Y_pred = model.predict(x)
    print(Y_pred)
    words = input('continue??\::')
    if words == 'n':
        break
```

The model structure prints fine:

![图片说明](https://img-ask.csdn.net/upload/202001/07/1578376564_807468.png)

But running further, a message box pops up with this error:

![图片说明](https://img-ask.csdn.net/upload/202001/07/1578376772_416127.png)

Any insight from the experts would be appreciated. My machine runs Windows XP, 32-bit, with under 1 GB of RAM (ancient, kept around for fun), with python 2.7.15, numpy 1.16.6, scipy 1.2.2, theano 1.0.4, keras 2.3.1.

Please be kind — I normally write TF on a server; this laptop is purely for entertainment. Thanks in advance!