How to change the input dimensions in Keras

This is a multi-class classification problem. The data I load in has input.shape = (40000, 1, 576, 2).
I want the final layer to have 8 units.
What code should I add?

1 answer

You can first reshape the data to (40000, 1152),
that is, 40000 input samples with an input dimension of 1152.
Then use Dense layers (one or several), with the final one outputting 8 units.
Essentially this is dimensionality reduction, so you could also search GitHub for autoencoder examples.
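A minimal sketch of that answer, assuming the 8-class labels are one-hot encoded (the random arrays stand in for the real data, and the hidden-layer width is only illustrative):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# stand-ins for the real data: 40000 samples of shape (1, 576, 2), 8 classes
x = np.random.rand(40000, 1, 576, 2).astype('float32')
y = np.eye(8)[np.random.randint(0, 8, 40000)]

x = x.reshape(40000, -1)  # flatten each sample: (40000, 1152)

model = Sequential()
model.add(Dense(128, activation='relu', input_dim=1152))  # one or more Dense layers
model.add(Dense(8, activation='softmax'))                 # final layer with 8 units
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=2, batch_size=128)
```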

Other related questions
Data input dimension question in the Keras framework

x = np.arange(20) creates a one-dimensional array with shape (20,). In Keras, if this is fed into a network directly, that means 20 input neurons, right? But if x = x.reshape((1, 20)) or x = x.reshape((20, 1)), the original one-dimensional array is treated as a single input, and after the reshape the value feeds the network as a single neuron, right? Are those two reshapes equivalent as inputs?
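For reference, a small sketch of how Keras reads those shapes; the first axis is always the batch axis, so the two reshapes are not equivalent:

```python
import numpy as np

x = np.arange(20)

a = x.reshape((1, 20))  # 1 sample with 20 features -> matches Dense(..., input_dim=20)
b = x.reshape((20, 1))  # 20 samples with 1 feature -> matches Dense(..., input_dim=1)
# Keras treats axis 0 as the batch (sample) axis, so `a` feeds 20 input neurons
# once, while `b` feeds a single input neuron with 20 separate samples.
```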

How should images be fed into Keras?

I'm building a CNN with Keras. How should the images be fed in? And is the source code of mnist.load_data() available anywhere?
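For what it's worth, a sketch of the usual pattern: mnist.load_data() (whose source lives in keras/datasets/mnist.py in the Keras repository) returns plain numpy arrays, and a CNN just needs an explicit channel axis added:

```python
import numpy as np
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()  # x_train: (60000, 28, 28)

# Conv2D (channels_last) expects (samples, height, width, channels)
x_train = np.expand_dims(x_train, -1).astype('float32') / 255.0  # (60000, 28, 28, 1)
x_test = np.expand_dims(x_test, -1).astype('float32') / 255.0
```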

keras model.fit reports an error that the input has the wrong number of dimensions; how do I fix it?

I'm calling

```
model.fit(x=images, y=labels, validation_split=0.1,
          batch_size=batch_size, epochs=n_epochs,
          callbacks=callbacks, shuffle=True)
```

Since the images in my training set are grayscale, images has shape (2, 28, 28), which raises: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (2, 28, 28). How should I handle this?
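A likely fix, assuming the data really is single-channel: Conv2D-based models expect a 4-D input of (samples, height, width, channels), so grayscale images just need an explicit channel axis (the zeros array below is a stand-in for the real data):

```python
import numpy as np

images = np.zeros((2, 28, 28), dtype='float32')  # stand-in for the real data

# (samples, 28, 28) -> (samples, 28, 28, 1); the model's input layer should
# then use input_shape=(28, 28, 1)
images = np.expand_dims(images, axis=-1)
print(images.shape)  # (2, 28, 28, 1)
```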

python keras Sequential input

With Convolution1D as the first layer of a Sequential model, what form should the input data take? ![screenshot](https://img-ask.csdn.net/upload/201611/13/1479043537_386017.png) ![screenshot](https://img-ask.csdn.net/upload/201611/13/1479043555_758273.png) I'm just getting started; some pointers would be much appreciated.
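As a rough sketch of an answer (Keras 2 API, where Convolution1D is spelled Conv1D): the layer expects each sample to be a 2-D array of (steps, channels), so the full input array is 3-D:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D

# 100 samples, each a sequence of 50 steps with 1 value per step
x = np.random.rand(100, 50, 1).astype('float32')

model = Sequential()
model.add(Conv1D(16, 3, padding='same', input_shape=(50, 1)))
model.summary()  # output shape (None, 50, 16)
```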

How to change the default learning rate of Keras on Ubuntu

I've recently run into a learning-rate problem. From what I've found, the Keras default learning rate is 0.01, and I'd like to change that default. Advice online says to edit the optimizer.py file under the Keras install path, but there are several optimizer.py files and I don't know which one to modify. Any guidance appreciated.
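A hedged side note on the usual approach: rather than editing optimizer.py in the install path, the learning rate is normally set per model by passing an optimizer instance (SGD's default lr is 0.01, which is presumably the default being referred to):

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

model = Sequential([Dense(10, activation='softmax', input_dim=20)])

# overriding the default lr=0.01 here avoids touching any optimizer.py
model.compile(optimizer=SGD(lr=0.001), loss='categorical_crossentropy')
```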

How do I modify the parameters of one particular layer in Keras?

For example, I use model.get_layer('inp_layer').get_weights()[0] to obtain that layer's weights. I'd like to modify the layer's parameters by hand; what assignment or other operation writes my desired values back into the layer?
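A minimal sketch of the usual pattern: set_weights is the inverse of get_weights, so the whole weight list is read, edited, and written back (the tiny model here is just for illustration):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(4, input_dim=8, name='inp_layer')])

layer = model.get_layer('inp_layer')
weights = layer.get_weights()            # [kernel, bias] for a Dense layer
weights[0] = np.zeros_like(weights[0])   # overwrite the kernel with desired values
layer.set_weights(weights)               # write the modified list back
```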

How do I use a Dataset as input to TensorFlow's Keras?

Whenever I use a dataset as the input data I get the error below, even though I already reshape the images to (512, 512, 1) in the parsing step. How should I fix this?

```
ValueError: Error when checking input: expected conv2d_input to have 4 dimensions, but got array with shape (None, 1)
```

**Image size definitions**

```
import tensorflow as tf
from tensorflow import keras

IMG_HEIGHT = 512
IMG_WIDTH = 512
IMG_CHANNELS = 1
IMG_PIXELS = IMG_CHANNELS * IMG_HEIGHT * IMG_WIDTH
```

**Parsing function**

```
def parser(record):
    features = tf.parse_single_example(record, features={
        'image_raw': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([23], tf.int64)
    })
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    label = tf.cast(features['label'], tf.int32)
    image.set_shape([IMG_PIXELS])
    image = tf.reshape(image, [IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
    image = tf.cast(image, tf.float32)
    return image, label
```

**Model construction**

```
dataset = tf.data.TFRecordDataset([TFRECORD_PATH])
dataset.map(parser)
dataset = dataset.repeat(10*10).batch(10)

model = keras.Sequential([
    keras.layers.Conv2D(filters=32, kernel_size=(5, 5), padding='same',
                        activation='relu', input_shape=(512, 512, 1)),
    keras.layers.MaxPool2D(pool_size=(2, 2)),
    keras.layers.Dropout(0.25),
    keras.layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same',
                        activation='relu'),
    keras.layers.MaxPool2D(pool_size=(2, 2)),
    keras.layers.Dropout(0.25),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.25),
    keras.layers.Dense(23, activation='softmax')
])
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.sparse_categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
model.fit(dataset.make_one_shot_iterator(), epochs=10, steps_per_epoch=10)
```
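One likely culprit, for what it's worth: tf.data transformations return a new dataset rather than modifying the existing one, so the map result has to be reassigned; otherwise parser is never applied and the model sees raw records instead of (512, 512, 1) images. A sketch against the code above:

```python
dataset = tf.data.TFRecordDataset([TFRECORD_PATH])
dataset = dataset.map(parser)   # map() returns a new dataset; without the
                                # reassignment, parser is never applied
dataset = dataset.repeat(10 * 10).batch(10)
```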

How should the input format be set when applying a pretrained Keras model to new data?

The network model is:

```
Model Summary:
Layer (type)  Output Shape  Param #  Connected to
==================================================================================================
img (InputLayer)  (None, 427, 561, 3)  0
dmap (InputLayer)  (None, 427, 561, 3)  0
padding1_1 (ZeroPadding2D)  (None, 429, 563, 3)  0  img[0][0]
padding1_1d (ZeroPadding2D)  (None, 429, 563, 3)  0  dmap[0][0]
conv1_1 (Conv2D)  (None, 427, 561, 64)  1792  padding1_1[0][0]
conv1_1d (Conv2D)  (None, 427, 561, 64)  1792  padding1_1d[0][0]
padding1_2 (ZeroPadding2D)  (None, 429, 563, 64)  0  conv1_1[0][0]
padding1_2d (ZeroPadding2D)  (None, 429, 563, 64)  0  conv1_1d[0][0]
conv1_2 (Conv2D)  (None, 427, 561, 64)  36928  padding1_2[0][0]
conv1_2d (Conv2D)  (None, 427, 561, 64)  36928  padding1_2d[0][0]
pool1 (MaxPooling2D)  (None, 214, 281, 64)  0  conv1_2[0][0]
pool1d (MaxPooling2D)  (None, 214, 281, 64)  0  conv1_2d[0][0]
padding2_1 (ZeroPadding2D)  (None, 216, 283, 64)  0  pool1[0][0]
padding2_1d (ZeroPadding2D)  (None, 216, 283, 64)  0  pool1d[0][0]
conv2_1 (Conv2D)  (None, 214, 281, 128)  73856  padding2_1[0][0]
conv2_1d (Conv2D)  (None, 214, 281, 128)  73856  padding2_1d[0][0]
padding2_2 (ZeroPadding2D)  (None, 216, 283, 128)  0  conv2_1[0][0]
padding2_2d (ZeroPadding2D)  (None, 216, 283, 128)  0  conv2_1d[0][0]
conv2_2 (Conv2D)  (None, 214, 281, 128)  147584  padding2_2[0][0]
conv2_2d (Conv2D)  (None, 214, 281, 128)  147584  padding2_2d[0][0]
pool2 (MaxPooling2D)  (None, 107, 141, 128)  0  conv2_2[0][0]
pool2d (MaxPooling2D)  (None, 107, 141, 128)  0  conv2_2d[0][0]
padding3_1 (ZeroPadding2D)  (None, 109, 143, 128)  0  pool2[0][0]
padding3_1d (ZeroPadding2D)  (None, 109, 143, 128)  0  pool2d[0][0]
conv3_1 (Conv2D)  (None, 107, 141, 256)  295168  padding3_1[0][0]
conv3_1d (Conv2D)  (None, 107, 141, 256)  295168  padding3_1d[0][0]
bn_conv3_1 (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_1[0][0]
bn_conv3_1d (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_1d[0][0]
relu3_1 (Activation)  (None, 107, 141, 256)  0  bn_conv3_1[0][0]
relu3_1d (Activation)  (None, 107, 141, 256)  0  bn_conv3_1d[0][0]
padding3_2 (ZeroPadding2D)  (None, 109, 143, 256)  0  relu3_1[0][0]
padding3_2d (ZeroPadding2D)  (None, 109, 143, 256)  0  relu3_1d[0][0]
conv3_2 (Conv2D)  (None, 107, 141, 256)  590080  padding3_2[0][0]
conv3_2d (Conv2D)  (None, 107, 141, 256)  590080  padding3_2d[0][0]
bn_conv3_2 (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_2[0][0]
bn_conv3_2d (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_2d[0][0]
relu3_2 (Activation)  (None, 107, 141, 256)  0  bn_conv3_2[0][0]
relu3_2d (Activation)  (None, 107, 141, 256)  0  bn_conv3_2d[0][0]
padding3_3 (ZeroPadding2D)  (None, 109, 143, 256)  0  relu3_2[0][0]
padding3_3d (ZeroPadding2D)  (None, 109, 143, 256)  0  relu3_2d[0][0]
conv3_3 (Conv2D)  (None, 107, 141, 256)  590080  padding3_3[0][0]
conv3_3d (Conv2D)  (None, 107, 141, 256)  590080  padding3_3d[0][0]
bn_conv3_3 (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_3[0][0]
bn_conv3_3d (BatchNormalization)  (None, 107, 141, 256)  1024  conv3_3d[0][0]
relu3_3 (Activation)  (None, 107, 141, 256)  0  bn_conv3_3[0][0]
relu3_3d (Activation)  (None, 107, 141, 256)  0  bn_conv3_3d[0][0]
pool3 (MaxPooling2D)  (None, 54, 71, 256)  0  relu3_3[0][0]
pool3d (MaxPooling2D)  (None, 54, 71, 256)  0  relu3_3d[0][0]
padding4_1 (ZeroPadding2D)  (None, 56, 73, 256)  0  pool3[0][0]
padding4_1d (ZeroPadding2D)  (None, 56, 73, 256)  0  pool3d[0][0]
conv4_1 (Conv2D)  (None, 54, 71, 512)  1180160  padding4_1[0][0]
conv4_1d (Conv2D)  (None, 54, 71, 512)  1180160  padding4_1d[0][0]
bn_conv4_1 (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_1[0][0]
bn_conv4_1d (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_1d[0][0]
relu4_1 (Activation)  (None, 54, 71, 512)  0  bn_conv4_1[0][0]
relu4_1d (Activation)  (None, 54, 71, 512)  0  bn_conv4_1d[0][0]
padding4_2 (ZeroPadding2D)  (None, 56, 73, 512)  0  relu4_1[0][0]
padding4_2d (ZeroPadding2D)  (None, 56, 73, 512)  0  relu4_1d[0][0]
conv4_2 (Conv2D)  (None, 54, 71, 512)  2359808  padding4_2[0][0]
conv4_2d (Conv2D)  (None, 54, 71, 512)  2359808  padding4_2d[0][0]
bn_conv4_2 (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_2[0][0]
bn_conv4_2d (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_2d[0][0]
relu4_2 (Activation)  (None, 54, 71, 512)  0  bn_conv4_2[0][0]
relu4_2d (Activation)  (None, 54, 71, 512)  0  bn_conv4_2d[0][0]
padding4_3 (ZeroPadding2D)  (None, 56, 73, 512)  0  relu4_2[0][0]
padding4_3d (ZeroPadding2D)  (None, 56, 73, 512)  0  relu4_2d[0][0]
conv4_3 (Conv2D)  (None, 54, 71, 512)  2359808  padding4_3[0][0]
conv4_3d (Conv2D)  (None, 54, 71, 512)  2359808  padding4_3d[0][0]
bn_conv4_3 (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_3[0][0]
bn_conv4_3d (BatchNormalization)  (None, 54, 71, 512)  2048  conv4_3d[0][0]
relu4_3 (Activation)  (None, 54, 71, 512)  0  bn_conv4_3[0][0]
relu4_3d (Activation)  (None, 54, 71, 512)  0  bn_conv4_3d[0][0]
pool4 (MaxPooling2D)  (None, 27, 36, 512)  0  relu4_3[0][0]
pool4d (MaxPooling2D)  (None, 27, 36, 512)  0  relu4_3d[0][0]
padding5_1 (ZeroPadding2D)  (None, 29, 38, 512)  0  pool4[0][0]
padding5_1d (ZeroPadding2D)  (None, 29, 38, 512)  0  pool4d[0][0]
conv5_1 (Conv2D)  (None, 27, 36, 512)  2359808  padding5_1[0][0]
conv5_1d (Conv2D)  (None, 27, 36, 512)  2359808  padding5_1d[0][0]
bn_conv5_1 (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_1[0][0]
bn_conv5_1d (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_1d[0][0]
relu5_1 (Activation)  (None, 27, 36, 512)  0  bn_conv5_1[0][0]
relu5_1d (Activation)  (None, 27, 36, 512)  0  bn_conv5_1d[0][0]
padding5_2 (ZeroPadding2D)  (None, 29, 38, 512)  0  relu5_1[0][0]
padding5_2d (ZeroPadding2D)  (None, 29, 38, 512)  0  relu5_1d[0][0]
conv5_2 (Conv2D)  (None, 27, 36, 512)  2359808  padding5_2[0][0]
conv5_2d (Conv2D)  (None, 27, 36, 512)  2359808  padding5_2d[0][0]
bn_conv5_2 (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_2[0][0]
bn_conv5_2d (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_2d[0][0]
relu5_2 (Activation)  (None, 27, 36, 512)  0  bn_conv5_2[0][0]
relu5_2d (Activation)  (None, 27, 36, 512)  0  bn_conv5_2d[0][0]
padding5_3 (ZeroPadding2D)  (None, 29, 38, 512)  0  relu5_2[0][0]
padding5_3d (ZeroPadding2D)  (None, 29, 38, 512)  0  relu5_2d[0][0]
conv5_3 (Conv2D)  (None, 27, 36, 512)  2359808  padding5_3[0][0]
conv5_3d (Conv2D)  (None, 27, 36, 512)  2359808  padding5_3d[0][0]
bn_conv5_3 (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_3[0][0]
bn_conv5_3d (BatchNormalization)  (None, 27, 36, 512)  2048  conv5_3d[0][0]
relu5_3 (Activation)  (None, 27, 36, 512)  0  bn_conv5_3[0][0]
rois (InputLayer)  (None, 5)  0
relu5_3d (Activation)  (None, 27, 36, 512)  0  bn_conv5_3d[0][0]
rois_context (InputLayer)  (None, 5)  0
pool5 (RoiPoolingConvSingle)  (None, 7, 7, 512)  0  relu5_3[0][0], rois[0][0]
pool5d (RoiPoolingConvSingle)  (None, 7, 7, 512)  0  relu5_3d[0][0], rois[0][0]
pool5_context (RoiPoolingConvSingle)  (None, 7, 7, 512)  0  relu5_3[0][0], rois_context[0][0]
pool5d_context (RoiPoolingConvSingle)  (None, 7, 7, 512)  0  relu5_3d[0][0], rois_context[0][0]
flatten (Flatten)  (None, 25088)  0  pool5[0][0]
flatten_d (Flatten)  (None, 25088)  0  pool5d[0][0]
flatten_context (Flatten)  (None, 25088)  0  pool5_context[0][0]
flatten_d_context (Flatten)  (None, 25088)  0  pool5d_context[0][0]
concat (Concatenate)  (None, 100352)  0  flatten[0][0], flatten_d[0][0], flatten_context[0][0], flatten_d_context[0][0]
fc6 (Dense)  (None, 4096)  411045888  concat[0][0]
drop6 (Dropout)  (None, 4096)  0  fc6[0][0]
fc7 (Dense)  (None, 4096)  16781312  drop6[0][0]
drop7 (Dropout)  (None, 4096)  0  fc7[0][0]
cls_score (Dense)  (None, 20)  81940  drop7[0][0]
bbox_pred_3d (Dense)  (None, 140)  573580  drop7[0][0]
==================================================================================================
Total params: 457,942,816
Trainable params: 457,927,456
Non-trainable params: 15,360
```

The model's inputs and outputs are defined as:

```
tf_model = Model(
    inputs=[img, dmap, rois, rois_context],
    outputs=[cls_score, bbox_pred_3d]
)
```

The format I use at predict time is:

```
roi2d = twod_Proposal('test1_rgb.jpg', 'q')  # get the roi proposals
print('------------------------------------------')
# for roi in roi2d:
#     print(roi)
# print('------------------------------------------')

# TODO: Set input to the model
img = cv2.imread('test1_rgb.jpg')
dmap = cv2.imread('test1_depth.jpg')
roi2d_context = get_roi_context(roi2d)
tf_model = make_deng_tf_test()
show_model_info(tf_model)
tf_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc'])
[score, result_predict] = tf_model.predict([img, dmap, roi2d, roi2d_context])
```

The error message is:

```
Using TensorFlow backend.
Total Number of Region Proposals: 5012
------------------------------------------
WARNING:tensorflow:From /Users/anaconda2/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-07-01 09:51:26.089446: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-07-01 09:51:26.090541: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From /Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Traceback (most recent call last):
  File "/Users/Documents/PycharmProjects/Amodal3Det_TF/tfmodel/model.py", line 375, in <module>
    [score, result_predict] = tf_model.predict([img, dmap, roi2d, roi2d_context])
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 1149, in predict
    x, _, _ = self._standardize_user_data(x)
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 751, in _standardize_user_data
    exception_prefix='input')
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 92, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 92, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 27, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'list' object has no attribute 'ndim'

Process finished with exit code 1
```

Roughly where is the problem?
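A hedged reading of the traceback: standardize_single_array received a plain Python list (which has no .ndim), so at least one of the four inputs, most plausibly roi2d or roi2d_context, is a list rather than a numpy array; cv2.imread also returns (H, W, 3) with no batch axis. A conversion sketch reusing the names from the snippet above (the images would still need resizing to the model's 427x561 input):

```python
import numpy as np

img = np.expand_dims(cv2.imread('test1_rgb.jpg'), axis=0)    # (1, H, W, 3)
dmap = np.expand_dims(cv2.imread('test1_depth.jpg'), axis=0)
roi2d = np.asarray(roi2d, dtype='float32')                   # (num_rois, 5)
roi2d_context = np.asarray(roi2d_context, dtype='float32')
```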

Input-dimension error when classifying images with an LSTM (RNN) in Keras

1. As the title says, I'm using an LSTM in Keras to classify images of six kinds of garbage from a local folder. Here is part of my code (my local images are 512x384 and get resized to 200x160):

```
nb_lstm_outputs = 128  # number of units
nb_time_steps = 200    # length of the time sequence
nb_input_vector = 160  # size of each input vector

# read the data and labels
print("------starting to read data------")
data = []
labels = []

# collect the image paths for later reading
imagePaths = sorted(list(utils_paths.list_images('./dataset-resized')))
random.seed(42)
random.shuffle(imagePaths)

# read the data
for imagePath in imagePaths:
    # read the image
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (160, 200))
    data.append(image)
    # read the label
    label = imagePath.split(os.path.sep)[-2]
    labels.append(label)

# scale the image data
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)

# split the dataset
(trainX, testX, trainY, testY) = train_test_split(data, labels,
                                                  test_size=0.25, random_state=42)

# convert the labels to one-hot encoding
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.transform(testY)

# initial hyperparameters
EPOCHS = 5
BS = 71
```

That is all of my preprocessing. Here is the model I built:

```
model = Sequential()
model.add(LSTM(units=nb_lstm_outputs, return_sequences=True,
               input_shape=(nb_time_steps, nb_input_vector)))  # returns a sequence of vectors
model.add(LSTM(units=nb_lstm_outputs, return_sequences=True))  # returns a sequence of vectors
model.add(LSTM(units=nb_lstm_outputs))  # returns a single vector
model.add(Dense(1, activation='softmax'))
model.add(Dense(6, activation='softmax'))

adam = Adam(lr=1e-4)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, epochs=EPOCHS, batch_size=BS, verbose=1,
          validation_data=(testX, testY))
```

After that comes the optimization and loss-plotting code. At run time, however, I hit the dimension error below:
![error screenshot](https://img-ask.csdn.net/upload/202004/26/1587884348_141131.png)
I tried changing to various sizes and always get the same error, so it does look like a dimension problem, but I don't understand where the 1895 comes from.

2. After hitting the dimension problem and not knowing how to solve it, I changed cv2.imread to load the images in grayscale:

```
image = cv2.imread(imagePath, cv2.IMREAD_GRAYSCALE)
```

With that change the code runs, but it doesn't seem to train with the batch size I set; it appears to train on the whole split at once, even though BS is passed to the batch_size argument:
![training screenshot](https://img-ask.csdn.net/upload/202004/26/1587884791_796238.png)

So: how do I solve the dimension problem, and why does BS seem to have no effect once passed in? Thanks, everyone; I'm a beginner!
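Two hedged observations on this one. First, cv2.imread returns (H, W, 3) color arrays by default, so data ends up 4-D, (N, 200, 160, 3), which an LSTM expecting (timesteps, features) cannot take; grayscale reading yields (N, 200, 160) and matches input_shape=(200, 160), which is presumably why that change made it run. If this is the usual dataset-resized folder of 2527 images, 1895 is exactly the 75% training split, and the Keras progress bar counts samples per epoch rather than batches, so batch_size is probably being honored after all. Second, the Dense(1, activation='softmax') layer squashes everything to a constant 1.0 before the 6-way output and should presumably be dropped. A sketch:

```python
# read as grayscale so each image is (200, 160): 200 "time steps" x 160 features
image = cv2.imread(imagePath, cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (160, 200))      # cv2.resize takes (width, height)

# ...and in the model, go straight from the last LSTM to the 6-way output:
# model.add(LSTM(units=nb_lstm_outputs))
# model.add(Dense(6, activation='softmax'))
```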

An error occurs when extracting intermediate-layer output using the method from the Keras documentation

Using the Keras method for extracting intermediate-layer data, I get the following error: object of type 'Conv2D' has no len(). This is my network structure; I want to extract the data from the last pooling layer:

```
model.add(tf.keras.layers.Conv2D(64, (3, 3), padding='same', strides=2,
                                 input_shape=(image_size, image_size, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'))
model.add(tf.keras.layers.Conv2D(128, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'))
model.add(tf.keras.layers.Conv2D(256, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.Conv2D(256, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.Conv2D(512, (3, 3), padding='same', strides=2, activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=2, padding='same', name="out"))
```

This is the extraction method I used:

```
layer_name = 'out'
intermediate_layer_model = Model(input=vgg.input, output=vgg.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(train_data)
```

Swapping the Conv2D here for MaxPooling2D gives the same message. Which part is wrong? Or is it that Keras's Conv2D cannot output data on its own?
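A hedged guess at the cause: the network is built from tf.keras layers, so the extraction model should also come from tf.keras, and the current keyword names are inputs/outputs (the input=/output= spelling is the old Keras 1 API; mixing standalone keras.models.Model with tf.keras layers also tends to produce confusing type errors like this one). A sketch, assuming model is the Sequential model from the question:

```python
from tensorflow.keras.models import Model

intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer('out').output)
intermediate_output = intermediate_layer_model.predict(train_data)
```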

How can 4-channel images be input and classified in keras+tensorflow?

ImageDataGenerator's default flow_from_directory has a color_mode setting, and from what I've read only 'gray' and 'rgb' are supported. But the images I need to process are 4-channel RGBD; how do I set this up? Advice from an expert would be appreciated. I tried setting color_mode to 'rgb' while giving the first convolutional layer a 4-channel input of (width, height, 4); it failed at run time with a message indicating that with color_mode set to 'rgb', the automatically generated batches still come out in 3-channel format. Specifically:

color_mode in flow_from_directory is 'rgb':

```
train_generator = train_datagen.flow_from_directory(
    directory=train_dir,  # this is the target directory
    target_size=(200, 200),  # all images will be resized to 200x200
    classes=potato_class,
    batch_size=60,
    color_mode='rgb',
    class_mode='sparse')
```

The convolutional layer's input_shape is set to 4 channels:

```
model = Sequential()
# CNN construction
model.add(Convolution2D(
    input_shape=(200, 200, 4),  # input_shape=(1, Width, Height),
    filters=16,
    kernel_size=3,
    strides=1,
    padding='same',
    data_format='channels_last',
    name='CONV_1'
))
```

The error after running is:

ValueError: Error when checking input: expected CONV_1_input to have shape (None, 200, 200, 4) but got array with shape (60, 200, 200, 3)

How can I get Keras to accept 4-channel images? I saw a comment on StackOverflow saying 4 channels are supported, but I couldn't find the code.
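One workaround sketch, assuming the RGB and depth images sit in parallel files: bypass flow_from_directory with a small custom generator that stacks the four channels itself (rgb_paths, depth_paths, and labels are hypothetical names). Newer keras-preprocessing releases are also said to accept color_mode='rgba', which may be simpler if the RGBD data is stored as 4-channel PNGs, but that is worth verifying against the installed version:

```python
import cv2
import numpy as np

def rgbd_generator(rgb_paths, depth_paths, labels, batch_size=60):
    # yields (batch, 200, 200, 4) arrays: RGB stacked with a depth channel
    while True:
        idx = np.random.randint(0, len(rgb_paths), batch_size)
        batch_x = np.zeros((batch_size, 200, 200, 4), dtype='float32')
        for i, j in enumerate(idx):
            rgb = cv2.resize(cv2.imread(rgb_paths[j]), (200, 200))
            depth = cv2.resize(cv2.imread(depth_paths[j], cv2.IMREAD_GRAYSCALE), (200, 200))
            batch_x[i] = np.dstack([rgb, depth]) / 255.0
        yield batch_x, labels[idx]
```

The generator is then passed to model.fit_generator in place of train_generator.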

A Keras ResNet gives a fixed output no matter what image is input

When I test with code like the following, result is always the same fixed value. I used to write model structures with model.add and had no problem that way; I'm not sure whether the issue is in how the structure is written here.

```
x = load_img(file, target_size=(img_width, img_height))
x = img_to_array(x)
x = np.expand_dims(x, axis=0)
array = model.predict(x)
result = array[0]
```

training.py:

```
# coding=utf-8
from keras.models import Model
from keras.layers import Input, Dense, Dropout, BatchNormalization, Conv2D, MaxPooling2D, AveragePooling2D, concatenate, \
    Activation, ZeroPadding2D
from keras.layers import add, Flatten
from keras.utils import plot_model
from keras.metrics import top_k_categorical_accuracy
from keras.preprocessing.image import ImageDataGenerator
from keras.models import load_model
from keras import optimizers
import os
import sys
import tensorflow as tf
from keras import callbacks

config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
sess = tf.Session(config=config)

DEV = False
argvs = sys.argv
argc = len(argvs)
if argc > 1 and (argvs[1] == "--development" or argvs[1] == "-d"):
    DEV = True
if DEV:
    EPOCH = 4
else:
    EPOCH = 1

# Global Constants
samples_per_epoch = 3750
validation_steps = 490
NB_CLASS = 5
IM_WIDTH = 100
IM_HEIGHT = 100
train_root = 'data/train'
vaildation_root = 'data/test'
batch_size = 16
lr = 0.0004

# train data
train_datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    rescale=1./255
)
train_generator = train_datagen.flow_from_directory(
    train_root,
    target_size=(IM_WIDTH, IM_HEIGHT),
    batch_size=batch_size,
    shuffle=True
)

# vaild data
vaild_datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
    rescale=1./255
)
vaild_generator = train_datagen.flow_from_directory(
    vaildation_root,
    target_size=(IM_WIDTH, IM_HEIGHT),
    batch_size=batch_size,
)

def Conv2d_BN(x, nb_filter, kernel_size, strides=(1, 1), padding='same', name=None):
    if name is not None:
        bn_name = name + '_bn'
        conv_name = name + '_conv'
    else:
        bn_name = None
        conv_name = None
    x = Conv2D(nb_filter, kernel_size, padding=padding, strides=strides, activation='relu', name=conv_name)(x)
    x = BatchNormalization(axis=3, name=bn_name)(x)
    return x

def identity_Block(inpt, nb_filter, kernel_size, strides=(1, 1), with_conv_shortcut=False):
    x = Conv2d_BN(inpt, nb_filter=nb_filter, kernel_size=kernel_size, strides=strides, padding='same')
    x = Conv2d_BN(x, nb_filter=nb_filter, kernel_size=kernel_size, padding='same')
    if with_conv_shortcut:  # shortcut: connect the input x to the final output y, as in the figure above
        shortcut = Conv2d_BN(inpt, nb_filter=nb_filter, strides=strides, kernel_size=kernel_size)
        x = add([x, shortcut])
        return x
    else:
        x = add([x, inpt])
        return x

def resnet_34(width, height, channel, classes):
    inpt = Input(shape=(width, height, channel))
    x = ZeroPadding2D((3, 3))(inpt)
    # conv1
    x = Conv2d_BN(x, nb_filter=64, kernel_size=(7, 7), strides=(2, 2), padding='valid')
    x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)
    # conv2_x
    x = identity_Block(x, nb_filter=64, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=64, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=64, kernel_size=(3, 3))
    # conv3_x
    x = identity_Block(x, nb_filter=128, kernel_size=(3, 3), strides=(2, 2), with_conv_shortcut=True)
    x = identity_Block(x, nb_filter=128, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=128, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=128, kernel_size=(3, 3))
    # conv4_x
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3), strides=(2, 2), with_conv_shortcut=True)
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=256, kernel_size=(3, 3))
    # conv5_x
    x = identity_Block(x, nb_filter=512, kernel_size=(3, 3), strides=(2, 2), with_conv_shortcut=True)
    x = identity_Block(x, nb_filter=512, kernel_size=(3, 3))
    x = identity_Block(x, nb_filter=512, kernel_size=(3, 3))
    x = AveragePooling2D(pool_size=(4, 4))(x)
    x = Flatten()(x)
    x = Dense(classes, activation='softmax')(x)
    model = Model(inputs=inpt, outputs=x)
    return model

if __name__ == '__main__':
    if (os.path.exists('modelresnet') and DEV):
        model = load_model('./modelresnet/resnet_50.h5')  ########
        model.load_weights('./modelresnet/weights.h5')
    else:
        model = resnet_34(IM_WIDTH, IM_HEIGHT, 3, NB_CLASS)
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizers.RMSprop(lr=lr),
                  metrics=['accuracy'])
    print('Model Compiled')
    model.fit_generator(
        train_generator,
        samples_per_epoch=samples_per_epoch,
        epochs=EPOCH,
        validation_data=vaild_generator,
        validation_steps=validation_steps)
    target_dir = './modelresnet/'
    if not os.path.exists(target_dir):
        os.mkdir(target_dir)
    model.save('./modelresnet/resnet_50.h5')
    model.save_weights('./modelresnet/weights.h5')
    # loss,acc,top_acc = model.evaluate_generator(test_generator, steps=test_generator.n / batch_size)
    # print 'Test result:loss:%f,acc:%f,top_acc:%f' % (loss, acc, top_acc)
```
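A hedged diagnosis: training uses rescale=1./255, but the test snippet feeds raw 0-255 pixel values to predict, and that scale mismatch very often drives a softmax network to the same saturated output for every image. A sketch of the test path with matching preprocessing, reusing file, img_width, img_height, and model from the snippet above:

```python
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

x = load_img(file, target_size=(img_width, img_height))
x = img_to_array(x) / 255.0          # match the rescale=1./255 used in training
x = np.expand_dims(x, axis=0)
result = model.predict(x)[0]
```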

Loading a CNN model with Keras: feeding in an image raises a numpy array error

![my code](https://img-ask.csdn.net/upload/201702/16/1487248734_973992.png) ![the error message](https://img-ask.csdn.net/upload/201702/16/1487248772_188799.png) The image I'm feeding in is grayscale, but the error seems to say the input is a color image. Could someone help explain? Thanks.

Keras reports an error; I'm a beginner and don't understand why

```
from keras.layers import Input, Dense, merge
from keras.models import Model
from keras import backend as K

a = Input(shape=(2,), name='a')
b = Input(shape=(2,), name='b')
a_rotated = Dense(2, activation='linear')(a)

def cosine(x):
    axis = len(x[0]._keras_shape) - 1
    dot = lambda a, b: K.batch_dot(a, b, axes=axis)
    return dot(x[0], x[1]) / K.sqrt(dot(x[0], x[0]) * dot(x[1], x[1]))

cosine_sim = merge([a_rotated, b], mode=cosine, output_shape=lambda x: x[:-1])

model = Model(input=[a, b], output=[cosine_sim])
model.compile(optimizer='sgd', loss='mse')

import numpy as np
a_data = np.asarray([[0, 1], [1, 0], [0, -1], [-1, 0]])
b_data = np.asarray([[1, 0], [0, -1], [-1, 0], [0, 1]])
targets = np.asarray([1, 1, 1, 1])

model.fit([a_data, b_data], [targets], nb_epoch=1000)
print(model.layers[2].W.get_value())
```

Something is wrong with this code.
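For comparison, a sketch of the same idea on the Keras 2 API, where the built-in dot layer with normalize=True L2-normalizes both inputs first and therefore computes exactly the cosine similarity, replacing the old merge interface used above:

```python
from keras.layers import Input, Dense, dot
from keras.models import Model

a = Input(shape=(2,), name='a')
b = Input(shape=(2,), name='b')
a_rotated = Dense(2, activation='linear')(a)

# normalize=True makes the dot product a cosine similarity
cosine_sim = dot([a_rotated, b], axes=-1, normalize=True)

model = Model(inputs=[a, b], outputs=cosine_sim)
model.compile(optimizer='sgd', loss='mse')
```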

Converting TensorFlow code to Keras

I have the TensorFlow code and structure diagram below; it's the generator part of an AC-GAN. It runs fine in native TF, but I'm stuck translating it to Keras.

```
def batch_norm(inputs, is_training=is_training, decay=0.9):
    return tf.contrib.layers.batch_norm(inputs, is_training=is_training, decay=decay)

# build a residual block
def g_block(inputs):
    h0 = tf.nn.relu(batch_norm(conv2d(inputs, 3, 64, 1, use_bias=False)))
    h0 = batch_norm(conv2d(h0, 3, 64, 1, use_bias=False))
    h0 = tf.add(h0, inputs)
    return h0

# generator
# batch_size = 32
# z : shape(32, 128)
# label : shape(32, 34)
def generator(z, label):
    with tf.variable_scope('generator', reuse=None):
        d = 16
        z = tf.concat([z, label], axis=1)
        h0 = tf.layers.dense(z, units=d * d * 64)
        h0 = tf.reshape(h0, shape=[-1, d, d, 64])
        h0 = tf.nn.relu(batch_norm(h0))
        shortcut = h0
        for i in range(16):
            h0 = g_block(h0)
        h0 = tf.nn.relu(batch_norm(h0))
        h0 = tf.add(h0, shortcut)
        for i in range(3):
            h0 = conv2d(h0, 3, 256, 1, use_bias=False)
            h0 = tf.depth_to_space(h0, 2)
            h0 = tf.nn.relu(batch_norm(h0))
        h0 = tf.layers.conv2d(h0, kernel_size=9, filters=3, strides=1,
                              padding='same', activation=tf.nn.tanh,
                              name='g', use_bias=True)
        return h0
```

![Generator structure](https://img-ask.csdn.net/upload/201910/29/1572278934_997142.png)

In Keras one always builds a Model and keeps adding layers to it, but the code above mixes in computations on old and new intermediate tensors, e.g.

```
....
shortcut = h0
....
h0 = tf.add(h0, shortcut)
```

Do I really need to build another model just for the intermediate output? Could someone explain, or show how this would be written in Keras?
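A sketch of how the residual block could translate, for what it's worth: the Keras functional API operates on tensors just like raw TF, so saving a tensor and re-adding it later needs no extra Model; keras.layers.add plays the role of tf.add:

```python
from keras.layers import Conv2D, BatchNormalization, Activation, add

def g_block(inputs):
    # relu(bn(conv)) -> bn(conv) -> add the shortcut, mirroring the TF version
    h0 = Conv2D(64, 3, padding='same', use_bias=False)(inputs)
    h0 = Activation('relu')(BatchNormalization()(h0))
    h0 = BatchNormalization()(Conv2D(64, 3, padding='same', use_bias=False)(h0))
    return add([h0, inputs])  # the shortcut connection
```

The same pattern covers the outer `shortcut = h0 ... add([h0, shortcut])`; the one piece without a stock Keras layer is tf.depth_to_space, which can be wrapped in a Lambda layer.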

LSTM input data format problem

The training samples train_x1 and labels train_y1 are both (20000, 10). After I reshape them to the 3-D shape (20000, 1, 10) I get an error. How should the format be changed? Ten values in, ten values out.

```
train_x1 = np.reshape(train_x1, (train_x1.shape[0], 1, train_x1.shape[1]))
train_y1 = np.reshape(train_y1, (train_y1.shape[0], 1, train_y1.shape[1]))

model = Sequential()
model.add(LSTM(50, input_shape=(train_x1.shape[1], train_x1.shape[2])))
model.add(Dense(10))
model.compile(loss='mse', optimizer='adam')
model.fit(train_x1, train_y1, nb_epoch=300, batch_size=10)
model.save_weights('LSTM.model')
```
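A hedged fix: with return_sequences left at its default, LSTM(50) emits a 2-D (batch, 50) tensor and Dense(10) a 2-D (batch, 10) one, so only the inputs need the extra time axis; the targets should keep their original shape:

```python
# inputs become (20000, 1, 10); targets stay (20000, 10) to match Dense(10)
train_x1 = np.reshape(train_x1, (train_x1.shape[0], 1, train_x1.shape[1]))
# (do not reshape train_y1 to 3-D)
model.fit(train_x1, train_y1, nb_epoch=300, batch_size=10)
```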

keras.utils.Sequence + fit_generator: how do I implement a multi-output model?

The inputs and outputs take this form:

```
model = Model(inputs=input_img, outputs=[mask, net2_opt, net3_opt])
```

Since a Sequence is required to return a two-element tuple, the generator's __getitem__ is implemented like this:

```
class DataGenerator(keras.utils.Sequence):
    def __getitem__(self, index):
        # produce one batch of data; read it however suits your dataset
        # generate batch_size indices
        batch_indexs = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        # fetch the corresponding entries from the datas collection
        batch_datas = [self.datas[k] for k in batch_indexs]
        # generate the data
        images, masks, heatmaps, xyzs = self.data_generation(batch_datas)
        return (images, [masks, heatmaps, xyzs])
```

But the mask output doesn't match what __getitem__ returns. It errors with:
ValueError: Error when checking target: expected conv_1x1_x14 to have 4 dimensions, but got array with shape (3,1)
Is it that keras.utils.Sequence cannot handle multi-output models?
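For what it's worth, keras.utils.Sequence does support multi-output models; a target shape like (3, 1) usually means the three targets came back as plain Python lists rather than stacked numpy arrays. A sketch with a hypothetical helper (stack_targets is not a Keras function):

```python
import numpy as np

def stack_targets(masks, heatmaps, xyzs):
    # each output head needs one ndarray whose first axis is the batch,
    # e.g. masks -> (batch, H, W, 1) to satisfy conv_1x1_x14's 4-D target
    return [np.stack(masks), np.stack(heatmaps), np.stack(xyzs)]
```

__getitem__ would then end with `return images, stack_targets(masks, heatmaps, xyzs)`.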

How can custom functions (such as a loss function) in a Keras model be saved with the model?

```python
batch_size = 128
original_dim = 100  # 25*4
latent_dim = 16  # dimension of z
intermediate_dim = 256  # dimension of the intermediate layer
nb_epoch = 50  # number of training epochs
epsilon_std = 1.0  # reparameterization

# my tips: encoding
x = Input(batch_shape=(batch_size, original_dim))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)  # mu
z_log_var = Dense(latent_dim)(h)  # sigma

# my tips: Gauss sampling, sample Z
def sampling(args):  # resampling
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(128, 16), mean=0., stddev=1.0)
    return z_mean + K.exp(z_log_var / 2) * epsilon

# note that "output_shape" isn't necessary with the TensorFlow backend
# my tips: get sample z(encoded)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')  # intermediate layer
decoder_mean = Dense(original_dim, activation='sigmoid')  # output layer
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

# my tips: loss(restruct X)+KL
def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = - 0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss

vae = Model(x, x_decoded_mean)
vae.compile(optimizer='rmsprop', loss=vae_loss)
vae.fit(x_train, x_train,
        shuffle=True,
        epochs=nb_epoch,
        verbose=2,
        batch_size=batch_size,
        validation_data=(x_valid, x_valid))
vae.save(path + '//VAE.h5')
```

This code builds a VAE. After saving the model, loading it first raised errors about undefined globals inside sampling; after replacing those variables with fixed numbers, it then complained that vae_loss is undefined (unknown loss function: vae_loss). I believe the problem is how the model's custom functions are saved, but I don't know how to solve it. I've only just picked up Keras and TensorFlow and many of these problems are new to me; any help from the experts would be appreciated, thanks!
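A hedged sketch of the standard workaround: save() stores custom functions by name only, so they must be supplied again through custom_objects at load time, with the function definitions importable or in scope; saving only the weights and rebuilding the architecture in code is the other common route:

```python
from keras.models import load_model

# the custom function objects must be in scope when loading
vae = load_model(path + '//VAE.h5',
                 custom_objects={'vae_loss': vae_loss, 'sampling': sampling})

# alternative: vae.save_weights(...) plus rebuilding the model in code,
# which avoids serializing the custom loss altogether
```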

