Why does setting a CPU usage cap throw an error when running a Keras model?

I'm running a model built with the Keras library, and to keep it from hogging my machine's resources I limited the number of CPU cores it may use. The code is as follows:

```python
import os
import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # suppress the "CPU instructions not compiled" warnings
config = tf.ConfigProto()
config.device_count = {'CPU': 4}  # commenting out this line makes the error go away
config.intra_op_parallelism_threads = 4
config.inter_op_parallelism_threads = 4
config.allow_soft_placement = True
config.log_device_placement = True  # print the device placement log
sess = tf.Session(config=config)
KTF.set_session(sess)
```

Running it raises the error: `Assignment not allowed to repeated field "device_count" in protocol message object`.
Which is quite puzzling...
How can this be fixed?

2 Answers

I asked someone on Jianshu; writing it as `config.device_count['CPU'] = 4` instead makes the error go away.
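In other words, `device_count` is a protobuf map field of `ConfigProto`, so a whole dict cannot be assigned to it, but individual entries can be set, or the dict can be passed to the constructor. A minimal sketch of both working variants under TF 1.x:

```python
import os
import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Variant 1: set the map entry instead of assigning a dict
config = tf.ConfigProto()
config.device_count['CPU'] = 4
config.intra_op_parallelism_threads = 4
config.inter_op_parallelism_threads = 4
config.allow_soft_placement = True

# Variant 2: pass device_count (and the other options) to the constructor
# config = tf.ConfigProto(device_count={'CPU': 4},
#                         intra_op_parallelism_threads=4,
#                         inter_op_parallelism_threads=4,
#                         allow_soft_placement=True)

KTF.set_session(tf.Session(config=config))
```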

lmw0320
lmw0320: I've already read that blog post; my follow-up question is posted under it as well, and it doesn't resolve my doubt.
Reply · 9 months ago
Other related questions
Installing keras with pip reports errors

I followed the tutorial: first `activate tensorflow`, then `pip install keras`, and it errors out. Even after updating pip there is still a pile of red error messages, and it looks like many files are failing. What should I do? Please help! ![screenshot](https://img-ask.csdn.net/upload/201904/21/1555841668_663068.png) ![screenshot](https://img-ask.csdn.net/upload/201904/21/1555841707_360847.png)

Converting a Keras model to a TPU model on Colab

I'm using a TPU to speed up training. Converting the Keras model to a TPU model raises the error shown here: ![screenshot](https://img-ask.csdn.net/upload/202001/14/1578998736_238721.png)

The key code is as follows. Imports:

```python
%tensorflow_version 1.x
import json
import os
import numpy as np
import tensorflow as tf
from tensorflow.python.keras.applications import resnet
from tensorflow.python.keras import callbacks
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
```

Code that converts the model to a TPU model:

```python
# This address identifies the TPU we'll use when configuring TensorFlow.
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
tf.logging.set_verbosity(tf.logging.INFO)
self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
self.model = resnet50.ResNet50(weights=None, input_shape=dataset.input_shape, classes=num_classes)
```
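Judging from the snippet, `self.model` is passed to `keras_to_tpu_model` before the `ResNet50` network is actually assigned to it, so the converter never sees the finished model. A minimal sketch of the usual order, reusing the names from the question (`resnet50`, `dataset.input_shape`, `num_classes` and `TPU_WORKER` are assumed to be defined elsewhere in that class):

```python
# 1. Build the Keras model first.
self.model = resnet50.ResNet50(weights=None,
                               input_shape=dataset.input_shape,
                               classes=num_classes)

# 2. Only then convert the built model to a TPU model.
self.model = tf.contrib.tpu.keras_to_tpu_model(
    self.model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
```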

Spyder: importing TensorFlow or keras shows no error, but the program terminates

Importing TensorFlow or keras in Spyder shows no error message, but the program just terminates and none of the later results are produced. Help! How do I fix this?

Problem when saving a Keras model

Recently I've been running a neural-network model with Keras. Training and testing both work fine, but saving fails. The statements I use to save the model:

```python
json_string = model.to_json()
open('my_model_architecture.json', 'w').write(json_string)   # save the network structure
model.save_weights('my_model_weights.h5', overwrite='true')  # save the weights
```

After running, it shows `Process finished with exit code -1073741819 (0xC0000005)`, and the saved .h5 weight file has no content. What is going on?

keras model.fit errors: the input has the wrong number of shape dimensions — how to fix it?

I'm calling:

```python
model.fit(x=images, y=labels, validation_split=0.1, batch_size=batch_size,
          epochs=n_epochs, callbacks=callbacks, shuffle=True)
```

Because the images in my training set are grayscale, `images` has shape (2, 28, 28), which triggers the error `Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (2, 28, 28)`. How should I handle this?
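For an image input, Keras expects an explicit channel axis in addition to the batch axis, so grayscale data usually needs a trailing channel dimension of 1. A minimal sketch, assuming channels-last format:

```python
import numpy as np

# images: (num_samples, 28, 28) grayscale -> (num_samples, 28, 28, 1)
images = np.expand_dims(images, axis=-1)
print(images.shape)  # e.g. (2, 28, 28, 1)
```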

Keras errors when adding a fully connected layer while training a DNN

![screenshot](https://img-ask.csdn.net/upload/201804/09/1523244974_485144.png) I changed the Keras version number, but it still errors.

Keras model predictions (predict) are all zeros

I built a model with Keras and trained it; the trained model is on Baidu Netdisk: link: https://pan.baidu.com/s/1wQ5MLhPDfhwlveY-ib92Ew password: f3gk. When using Keras predict, the output is all zeros no matter what the input is. Code:

```python
from keras.models import Sequential, Model
from keras.layers.convolutional_recurrent import ConvLSTM2D
from keras.layers.normalization import BatchNormalization
from keras.utils import plot_model
from keras.models import load_model
from keras import metrics
import numpy as np
import os
import json
import keras
import matplotlib.pyplot as plt
import math
from keras import losses
import shutil
from keras import backend as K
from keras import optimizers

# custom loss function
def my_loss(y_true, y_pred):
    if not K.is_tensor(y_pred):
        y_pred = K.constant(y_pred, dtype='float64')
    y_true = K.cast(y_true, y_pred.dtype)
    return K.mean(K.abs((y_true - y_pred) / K.clip(K.abs(y_true), K.epsilon(), None)))

# custom evaluation metric
def mean_squared_percentage_error(y_true, y_pred):
    if not K.is_tensor(y_pred):
        y_pred = K.constant(y_pred, dtype='float64')
    y_true = K.cast(y_true, y_pred.dtype)
    return K.mean(K.square((y_pred - y_true) / K.clip(K.abs(y_true), K.epsilon(), None)))

model_path = os.path.join('model/model', 'model.h5')
seq = load_model(model_path, custom_objects={'my_loss': my_loss,
                                             'mean_squared_percentage_error': mean_squared_percentage_error})
print(seq.summary())

input_data = np.random.random([1, 12, 56, 56, 1])
output_data = seq.predict(input_data, batch_size=16, verbose=1)
print(output_data[0][:, :, 0])
```

The output is:

```
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape               Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 56, 56, 40)   59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 56, 56, 40)   160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 56, 56, 40)   115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 56, 56, 40)   160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, 56, 56, 1)          1480
=================================================================
Total params: 176,360
Trainable params: 176,200
Non-trainable params: 160
None
1/1 [==============================] - 1s 812ms/step
[[ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 ...
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0.  0.]
 [ 0.  0.  0. ...  0.  0. -0.]]
```

I don't understand why this happens; even with randomly generated data as the input, the result looks like this.

Keras: concurrent load_model errors

I load the model on the fly from web code to run predictions, but I get the following error:

```
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/root/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/root/anaconda3/lib/python3.6/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/root/anaconda3/lib/python3.6/site-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/root/anaconda3/code/App.py", line 41, in predict
    model=load_model(root_path+model_name)
  File "/root/anaconda3/lib/python3.6/site-packages/keras/models.py", line 249, in load_model
    topology.load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/root/anaconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 3008, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/root/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2189, in batch_set_value
    get_session().run(assign_ops, feed_dict=feed_dict)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1071, in _run
    + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(1, 16), dtype=float32) is not an element of this graph.
```
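A common workaround for this kind of `is not an element of this graph` error in a Flask service is to load the model once at startup, remember the graph it was loaded into, and run every later call inside that graph, rather than calling `load_model` inside the request handler. A minimal sketch under those assumptions (path, input shape and route are placeholders):

```python
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify
from keras.models import load_model

app = Flask(__name__)

# Load once at startup and remember the default graph.
model = load_model('/root/models/my_model.h5')   # hypothetical path
graph = tf.get_default_graph()

@app.route('/predict')
def predict():
    x = np.zeros((1, 16))            # placeholder input with the model's expected shape
    with graph.as_default():         # reuse the original graph on every request
        y = model.predict(x)
    return jsonify(y.tolist())
```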

When plotting the model accuracy results with Keras, this appears:

After building the deep-learning model, I train it with backpropagation. The training configuration:

```python
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```

Run the training:

```python
train_history = model.fit(x=x_Train_normalize, y=y_Train_OneHot, validation_split=0.2,
                          epochs=10, batch_size=200, verbose=2)
```

After running, this appears: ![screenshot](https://img-ask.csdn.net/upload/201910/17/1571243584_952792.png)

Define show_train_history to display the training process:

```python
import matplotlib.pyplot as plt

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()
```

Plot the accuracy results:

```python
show_train_history(train_history, 'acc', 'val_acc')
```

Which produces the following problem: ![screenshot](https://img-ask.csdn.net/upload/201910/17/1571243832_179270.png)

What is going on here? Please help!
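If the failure is a `KeyError` on `'acc'` (an assumption, since the screenshot isn't included here), a frequent cause is that newer Keras/TensorFlow versions record the metric under `'accuracy'`/`'val_accuracy'` rather than `'acc'`/`'val_acc'`. A quick check:

```python
print(train_history.history.keys())  # see which keys were actually recorded
# then plot with whichever names appear, e.g.:
# show_train_history(train_history, 'accuracy', 'val_accuracy')
```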

In a TensorFlow environment, Python stops working as soon as I import keras?

I'm a Python beginner. Whenever I `import keras`, Python stops working. TensorFlow is version 1.2.1 and Keras is version 2.1.1; it happens even with the firewall turned off. The code and the crash report are below, please advise.

```python
# -*- coding: utf-8 -*-
import numpy as np
from scipy.io import loadmat, savemat
from keras.utils import np_utils
```

```
Problem Event Name:        BEX64
Application Name:          pythonw.exe
Application Version:       3.6.2150.1013
Application Timestamp:     5970e8ca
Fault Module Name:         StackHash_1dc2
Fault Module Version:      0.0.0.0
Fault Module Timestamp:    00000000
Exception Offset:          0000000000000000
Exception Code:            c0000005
Exception Data:            0000000000000008
OS Version:                6.1.7601.2.1.0.256.1
Locale ID:                 2052
Additional Information 1:  1dc2
Additional Information 2:  1dc22fb1de37d348f27e54dbb5278e7d
Additional Information 3:  eae3
Additional Information 4:  eae36a4b5ffb27c9d33117f4125a75c2
```

Model built with Keras: same loss and metrics during training, but the outputs differ

With a model built in Keras, I use the same loss and metrics during training, yet the reported outputs are not the same. Why does this happen?

If a Keras sentiment-analysis model is deployed in a Java web app, how should the web backend preprocess strings into feature vectors?

I'm still a beginner and have only put together some simple code, but I'd really like to know how to use a trained model in a web project. I used the following code when training the model:

```python
# (loading the data is omitted)
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_texts)
train_sequences = tokenizer.texts_to_sequences(train_texts)
test_sequences = tokenizer.texts_to_sequences(test_texts)
train_data = pad_sequences(train_sequences, maxlen=MAX_SEQUENCE_LENGTH)
test_data = pad_sequences(test_sequences, maxlen=MAX_SEQUENCE_LENGTH)
```

Since the tokenizer was fit on the training texts and stores the vocabulary and similar information, before the web project calls the model to predict on new text, should I preprocess that text with the same tokenizer's information? And if so, how do I do that in the web backend?
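Yes, the same fitted tokenizer has to be reused at prediction time. A minimal sketch of one way to do it, assuming the tokenizer is persisted with pickle and the preprocessing runs in a small Python service that the Java backend calls (rather than reimplementing it in Java):

```python
import pickle
from keras.preprocessing.sequence import pad_sequences

# --- at training time: persist the fitted tokenizer ---
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)

# --- at prediction time, before calling model.predict ---
with open('tokenizer.pkl', 'rb') as f:
    tokenizer = pickle.load(f)

def texts_to_features(texts, maxlen=MAX_SEQUENCE_LENGTH):
    # same vocabulary and same padding length as during training
    return pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=maxlen)
```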

Calling keras on Ubuntu errors: No module named 'error'

CUDA 9.0 and TensorFlow 1.8.0 are installed, and `import tensorflow` works fine; the error only appears on `import keras`. The traceback:

```
Using TensorFlow backend.
Traceback (most recent call last):
  File "/home/zhangzhiyang/PycharmProjects/tensorflow1/test_keras.py", line 2, in <module>
    import keras
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/__init__.py", line 26, in <module>
    from .multi_gpu_utils import multi_gpu_model
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/utils/multi_gpu_utils.py", line 7, in <module>
    from ..layers.merge import concatenate
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/layers/__init__.py", line 4, in <module>
    from ..engine.base_layer import Layer
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/__init__.py", line 7, in <module>
    from .network import get_source_inputs
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/keras/engine/network.py", line 9, in <module>
    import yaml
  File "/home/zhangzhiyang/anaconda3/envs/tensorflow/lib/python3.6/site-packages/yaml/__init__.py", line 2, in <module>
    from error import *
ModuleNotFoundError: No module named 'error'
```

My versions: tensorflow 1.8.0, CUDA 9.0, cuDNN 7, anaconda3, python 3.6.5. Both tensorflow and keras are installed under anaconda3/envs/tensorflow/lib/python3.6/site-packages. My .bashrc:

```
export PATH="/home/zhangzhiyang/anaconda3/bin:$PATH"
export LD_LIBRARY_PATH="/home/zhangzhiyang/newdisk/cuda-9.0/lib64:$LD_LIBRARY_PATH"
export PATH="/home/zhangzhiyang/newdisk/cuda-9.0/bin:$PATH"
export CUDA_HOME=$CUDA_HOME:"/home/zhangzhiyang/newdisk/cuda-9.0"
```

My guess is that it's a Python version problem, but I don't know how to fix it. The first time I pip-installed Keras without specifying a path, it ended up under python2.7; this time I pointed the install at python3.6/site-packages, but the error above appeared. Does Keras not support Python 3? Any help is appreciated!

Keras reports an OMP problem

I'm on Ubuntu 16.04, with tensorflow 1.13.1 and keras 2.2.4 set up through anaconda3. It didn't report errors before, but after I was forced to reinstall the environment, the OMP threading problem shown in the screenshot appears. It has puzzled me for a long time; any pointers are appreciated. Screenshot: ![screenshot](https://img-ask.csdn.net/upload/201904/18/1555586019_885202.png)

How to build a locally connected convolutional network with the Keras functional API?

I've been studying convolutional neural networks. While building LeNet-5, the ancestor of CNNs, I ran into the following problem. There is a connection pattern like this: ![diagram](https://img-ask.csdn.net/upload/201910/28/1572246925_411564.jpg)

The 16 feature maps of layer C3 are generated from the 6 feature maps of layer S2, but not every C3 map is obtained by a fully connected convolution-and-sum over the previous layer. For example, C3's map 1 is locally connected only to S2's maps 1, 2 and 3: the convolutions are summed and a bias is added to produce C3's first feature map. How can this connection pattern be expressed in Keras?

I first considered the simplest Sequential model, but found no API for feeding only part of the previous layer's feature maps into the next layer (maybe I just missed it), so I turned to the functional API:

```python
import keras
from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten
from keras.models import Model

input_LeNet5 = Input(shape=(32, 32, 1))
c1 = Conv2D(6, (5, 5))(input_LeNet5)
s2 = MaxPooling2D((2, 2))(c1)
print(np.shape(s2))
```

Here I built the first two layers of LeNet-5 and printed the shape of S2: a (?, 14, 14, 6) tensor, where the 6 clearly stands for the 6 different maps in S2.

```
TensorShape([Dimension(None), Dimension(14), Dimension(14), Dimension(6)])
```

So I thought I could slice the last dimension of the tensor, as below, and use s21 as the input of c31; this code compiles fine:

```python
s21 = s2[:, :, :, 0:3]
c31 = Conv2D(1, (5, 5))(s21)
```

But building the whole model with Model fails:

```python
model = Model(inputs=input_LeNet5, outputs=c31)
```

```
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
```

Testing shows the problem appears whenever the previous layer's output is sliced; my guess is that slicing makes s21 lose S2's data type and attributes. The models I've seen from others never involve this operation, and the Keras docs don't describe it either. Has anyone built a similar model? A solution without Keras would also be fine.
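One common way around this (an editorial suggestion, not from the original thread) is to wrap the slice in a `Lambda` layer, so the result remains a proper Keras layer output with `_inbound_nodes`:

```python
from keras.layers import Lambda

# slice channels 0..2 of s2 inside a Keras layer instead of slicing the raw tensor
s21 = Lambda(lambda t: t[:, :, :, 0:3])(s2)
c31 = Conv2D(1, (5, 5))(s21)
model = Model(inputs=input_LeNet5, outputs=c31)
```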

Help with grid-search hyperparameter tuning for a Keras model

![screenshot](https://img-ask.csdn.net/upload/201908/20/1566283673_852822.png) I'm using sklearn's grid search to tune a Keras GRU model. After one set of parameters tunes successfully, the subsequent parameter settings simply refuse to work and keep failing with an 'invalid parameter' error. Please point out where the problem is.

How to run a program on the GPU (Keras framework)

I'm learning neural networks, and while working on them I'd like the GPU to train the network. So how do I run the program on the GPU (Keras framework)?
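With the TensorFlow backend, Keras picks up the GPU automatically as long as the GPU build of TensorFlow (plus matching CUDA/cuDNN) is installed; device selection is usually done through an environment variable. A minimal sketch for checking and selecting the GPU under TF 1.x:

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # expose only the first GPU; '-1' would force CPU

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())          # True if TensorFlow can use a GPU
print(device_lib.list_local_devices())     # lists the visible CPU/GPU devices
```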

How to extract features from multiple layers of a pretrained model with Keras?

![diagram](https://img-ask.csdn.net/upload/201906/19/1560958477_965287.jpg) I want to extract features from different layers of a pretrained convolutional neural network and then concatenate them, to get a network structure like the one in the figure above. My code is:

```python
base_model = VGGFace(model='resnet50', include_top=False)
model1 = base_model
model2 = base_model

input1 = Input(shape=(197, 197, 3))
model1_out = model1.layers[-12].output
model1_in = model1.layers[0].output
model1 = Model(model1_in, model1_out)

x1 = model1(input1)
x1 = GlobalMaxPool2D()(x1)
x2 = model2(input1)
x2 = GlobalMaxPool2D()(x2)
out = Concatenate(axis=-1)([x1, x2])
out = Dense(1, activation='sigmoid')(out)
model3 = Model([input1, input2], out)

from keras.utils import plot_model
plot_model(model3, "model3.png")
import matplotlib.pyplot as plt
img = plt.imread('model3.png')
plt.imshow(img)
```

But the model visualization below shows that the two branches do not share weights. ![screenshot](https://img-ask.csdn.net/upload/201906/19/1560959263_500375.png)
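One way to keep the weights shared (a sketch of the usual functional-API pattern, not taken from the original thread) is to draw both feature tensors from the same `base_model` instance instead of wrapping it into two separate models:

```python
# Both outputs come from the same base_model graph, so the conv weights are shared.
feat_mid = GlobalMaxPool2D()(base_model.layers[-12].output)
feat_top = GlobalMaxPool2D()(base_model.output)

out = Concatenate(axis=-1)([feat_mid, feat_top])
out = Dense(1, activation='sigmoid')(out)

# Reuse base_model's own input instead of creating a new Input layer.
model3 = Model(inputs=base_model.input, outputs=out)
```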

How should the input format be set when applying a pretrained Keras model to new data?

网络模型为: ``` Model Summary: __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== img (InputLayer) (None, 427, 561, 3) 0 __________________________________________________________________________________________________ dmap (InputLayer) (None, 427, 561, 3) 0 __________________________________________________________________________________________________ padding1_1 (ZeroPadding2D) (None, 429, 563, 3) 0 img[0][0] __________________________________________________________________________________________________ padding1_1d (ZeroPadding2D) (None, 429, 563, 3) 0 dmap[0][0] __________________________________________________________________________________________________ conv1_1 (Conv2D) (None, 427, 561, 64) 1792 padding1_1[0][0] __________________________________________________________________________________________________ conv1_1d (Conv2D) (None, 427, 561, 64) 1792 padding1_1d[0][0] __________________________________________________________________________________________________ padding1_2 (ZeroPadding2D) (None, 429, 563, 64) 0 conv1_1[0][0] __________________________________________________________________________________________________ padding1_2d (ZeroPadding2D) (None, 429, 563, 64) 0 conv1_1d[0][0] __________________________________________________________________________________________________ conv1_2 (Conv2D) (None, 427, 561, 64) 36928 padding1_2[0][0] __________________________________________________________________________________________________ conv1_2d (Conv2D) (None, 427, 561, 64) 36928 padding1_2d[0][0] __________________________________________________________________________________________________ pool1 (MaxPooling2D) (None, 214, 281, 64) 0 conv1_2[0][0] __________________________________________________________________________________________________ pool1d (MaxPooling2D) (None, 214, 281, 64) 0 conv1_2d[0][0] __________________________________________________________________________________________________ padding2_1 (ZeroPadding2D) (None, 216, 283, 64) 0 pool1[0][0] __________________________________________________________________________________________________ padding2_1d (ZeroPadding2D) (None, 216, 283, 64) 0 pool1d[0][0] __________________________________________________________________________________________________ conv2_1 (Conv2D) (None, 214, 281, 128 73856 padding2_1[0][0] __________________________________________________________________________________________________ conv2_1d (Conv2D) (None, 214, 281, 128 73856 padding2_1d[0][0] __________________________________________________________________________________________________ padding2_2 (ZeroPadding2D) (None, 216, 283, 128 0 conv2_1[0][0] __________________________________________________________________________________________________ padding2_2d (ZeroPadding2D) (None, 216, 283, 128 0 conv2_1d[0][0] __________________________________________________________________________________________________ conv2_2 (Conv2D) (None, 214, 281, 128 147584 padding2_2[0][0] __________________________________________________________________________________________________ conv2_2d (Conv2D) (None, 214, 281, 128 147584 padding2_2d[0][0] __________________________________________________________________________________________________ pool2 (MaxPooling2D) (None, 107, 141, 128 0 conv2_2[0][0] 
__________________________________________________________________________________________________ pool2d (MaxPooling2D) (None, 107, 141, 128 0 conv2_2d[0][0] __________________________________________________________________________________________________ padding3_1 (ZeroPadding2D) (None, 109, 143, 128 0 pool2[0][0] __________________________________________________________________________________________________ padding3_1d (ZeroPadding2D) (None, 109, 143, 128 0 pool2d[0][0] __________________________________________________________________________________________________ conv3_1 (Conv2D) (None, 107, 141, 256 295168 padding3_1[0][0] __________________________________________________________________________________________________ conv3_1d (Conv2D) (None, 107, 141, 256 295168 padding3_1d[0][0] __________________________________________________________________________________________________ bn_conv3_1 (BatchNormalization) (None, 107, 141, 256 1024 conv3_1[0][0] __________________________________________________________________________________________________ bn_conv3_1d (BatchNormalization (None, 107, 141, 256 1024 conv3_1d[0][0] __________________________________________________________________________________________________ relu3_1 (Activation) (None, 107, 141, 256 0 bn_conv3_1[0][0] __________________________________________________________________________________________________ relu3_1d (Activation) (None, 107, 141, 256 0 bn_conv3_1d[0][0] __________________________________________________________________________________________________ padding3_2 (ZeroPadding2D) (None, 109, 143, 256 0 relu3_1[0][0] __________________________________________________________________________________________________ padding3_2d (ZeroPadding2D) (None, 109, 143, 256 0 relu3_1d[0][0] __________________________________________________________________________________________________ conv3_2 (Conv2D) (None, 107, 141, 256 590080 padding3_2[0][0] __________________________________________________________________________________________________ conv3_2d (Conv2D) (None, 107, 141, 256 590080 padding3_2d[0][0] __________________________________________________________________________________________________ bn_conv3_2 (BatchNormalization) (None, 107, 141, 256 1024 conv3_2[0][0] __________________________________________________________________________________________________ bn_conv3_2d (BatchNormalization (None, 107, 141, 256 1024 conv3_2d[0][0] __________________________________________________________________________________________________ relu3_2 (Activation) (None, 107, 141, 256 0 bn_conv3_2[0][0] __________________________________________________________________________________________________ relu3_2d (Activation) (None, 107, 141, 256 0 bn_conv3_2d[0][0] __________________________________________________________________________________________________ padding3_3 (ZeroPadding2D) (None, 109, 143, 256 0 relu3_2[0][0] __________________________________________________________________________________________________ padding3_3d (ZeroPadding2D) (None, 109, 143, 256 0 relu3_2d[0][0] __________________________________________________________________________________________________ conv3_3 (Conv2D) (None, 107, 141, 256 590080 padding3_3[0][0] __________________________________________________________________________________________________ conv3_3d (Conv2D) (None, 107, 141, 256 590080 padding3_3d[0][0] __________________________________________________________________________________________________ bn_conv3_3 
(BatchNormalization) (None, 107, 141, 256 1024 conv3_3[0][0] __________________________________________________________________________________________________ bn_conv3_3d (BatchNormalization (None, 107, 141, 256 1024 conv3_3d[0][0] __________________________________________________________________________________________________ relu3_3 (Activation) (None, 107, 141, 256 0 bn_conv3_3[0][0] __________________________________________________________________________________________________ relu3_3d (Activation) (None, 107, 141, 256 0 bn_conv3_3d[0][0] __________________________________________________________________________________________________ pool3 (MaxPooling2D) (None, 54, 71, 256) 0 relu3_3[0][0] __________________________________________________________________________________________________ pool3d (MaxPooling2D) (None, 54, 71, 256) 0 relu3_3d[0][0] __________________________________________________________________________________________________ padding4_1 (ZeroPadding2D) (None, 56, 73, 256) 0 pool3[0][0] __________________________________________________________________________________________________ padding4_1d (ZeroPadding2D) (None, 56, 73, 256) 0 pool3d[0][0] __________________________________________________________________________________________________ conv4_1 (Conv2D) (None, 54, 71, 512) 1180160 padding4_1[0][0] __________________________________________________________________________________________________ conv4_1d (Conv2D) (None, 54, 71, 512) 1180160 padding4_1d[0][0] __________________________________________________________________________________________________ bn_conv4_1 (BatchNormalization) (None, 54, 71, 512) 2048 conv4_1[0][0] __________________________________________________________________________________________________ bn_conv4_1d (BatchNormalization (None, 54, 71, 512) 2048 conv4_1d[0][0] __________________________________________________________________________________________________ relu4_1 (Activation) (None, 54, 71, 512) 0 bn_conv4_1[0][0] __________________________________________________________________________________________________ relu4_1d (Activation) (None, 54, 71, 512) 0 bn_conv4_1d[0][0] __________________________________________________________________________________________________ padding4_2 (ZeroPadding2D) (None, 56, 73, 512) 0 relu4_1[0][0] __________________________________________________________________________________________________ padding4_2d (ZeroPadding2D) (None, 56, 73, 512) 0 relu4_1d[0][0] __________________________________________________________________________________________________ conv4_2 (Conv2D) (None, 54, 71, 512) 2359808 padding4_2[0][0] __________________________________________________________________________________________________ conv4_2d (Conv2D) (None, 54, 71, 512) 2359808 padding4_2d[0][0] __________________________________________________________________________________________________ bn_conv4_2 (BatchNormalization) (None, 54, 71, 512) 2048 conv4_2[0][0] __________________________________________________________________________________________________ bn_conv4_2d (BatchNormalization (None, 54, 71, 512) 2048 conv4_2d[0][0] __________________________________________________________________________________________________ relu4_2 (Activation) (None, 54, 71, 512) 0 bn_conv4_2[0][0] __________________________________________________________________________________________________ relu4_2d (Activation) (None, 54, 71, 512) 0 bn_conv4_2d[0][0] 
__________________________________________________________________________________________________ padding4_3 (ZeroPadding2D) (None, 56, 73, 512) 0 relu4_2[0][0] __________________________________________________________________________________________________ padding4_3d (ZeroPadding2D) (None, 56, 73, 512) 0 relu4_2d[0][0] __________________________________________________________________________________________________ conv4_3 (Conv2D) (None, 54, 71, 512) 2359808 padding4_3[0][0] __________________________________________________________________________________________________ conv4_3d (Conv2D) (None, 54, 71, 512) 2359808 padding4_3d[0][0] __________________________________________________________________________________________________ bn_conv4_3 (BatchNormalization) (None, 54, 71, 512) 2048 conv4_3[0][0] __________________________________________________________________________________________________ bn_conv4_3d (BatchNormalization (None, 54, 71, 512) 2048 conv4_3d[0][0] __________________________________________________________________________________________________ relu4_3 (Activation) (None, 54, 71, 512) 0 bn_conv4_3[0][0] __________________________________________________________________________________________________ relu4_3d (Activation) (None, 54, 71, 512) 0 bn_conv4_3d[0][0] __________________________________________________________________________________________________ pool4 (MaxPooling2D) (None, 27, 36, 512) 0 relu4_3[0][0] __________________________________________________________________________________________________ pool4d (MaxPooling2D) (None, 27, 36, 512) 0 relu4_3d[0][0] __________________________________________________________________________________________________ padding5_1 (ZeroPadding2D) (None, 29, 38, 512) 0 pool4[0][0] __________________________________________________________________________________________________ padding5_1d (ZeroPadding2D) (None, 29, 38, 512) 0 pool4d[0][0] __________________________________________________________________________________________________ conv5_1 (Conv2D) (None, 27, 36, 512) 2359808 padding5_1[0][0] __________________________________________________________________________________________________ conv5_1d (Conv2D) (None, 27, 36, 512) 2359808 padding5_1d[0][0] __________________________________________________________________________________________________ bn_conv5_1 (BatchNormalization) (None, 27, 36, 512) 2048 conv5_1[0][0] __________________________________________________________________________________________________ bn_conv5_1d (BatchNormalization (None, 27, 36, 512) 2048 conv5_1d[0][0] __________________________________________________________________________________________________ relu5_1 (Activation) (None, 27, 36, 512) 0 bn_conv5_1[0][0] __________________________________________________________________________________________________ relu5_1d (Activation) (None, 27, 36, 512) 0 bn_conv5_1d[0][0] __________________________________________________________________________________________________ padding5_2 (ZeroPadding2D) (None, 29, 38, 512) 0 relu5_1[0][0] __________________________________________________________________________________________________ padding5_2d (ZeroPadding2D) (None, 29, 38, 512) 0 relu5_1d[0][0] __________________________________________________________________________________________________ conv5_2 (Conv2D) (None, 27, 36, 512) 2359808 padding5_2[0][0] __________________________________________________________________________________________________ conv5_2d (Conv2D) (None, 27, 36, 
512) 2359808 padding5_2d[0][0] __________________________________________________________________________________________________ bn_conv5_2 (BatchNormalization) (None, 27, 36, 512) 2048 conv5_2[0][0] __________________________________________________________________________________________________ bn_conv5_2d (BatchNormalization (None, 27, 36, 512) 2048 conv5_2d[0][0] __________________________________________________________________________________________________ relu5_2 (Activation) (None, 27, 36, 512) 0 bn_conv5_2[0][0] __________________________________________________________________________________________________ relu5_2d (Activation) (None, 27, 36, 512) 0 bn_conv5_2d[0][0] __________________________________________________________________________________________________ padding5_3 (ZeroPadding2D) (None, 29, 38, 512) 0 relu5_2[0][0] __________________________________________________________________________________________________ padding5_3d (ZeroPadding2D) (None, 29, 38, 512) 0 relu5_2d[0][0] __________________________________________________________________________________________________ conv5_3 (Conv2D) (None, 27, 36, 512) 2359808 padding5_3[0][0] __________________________________________________________________________________________________ conv5_3d (Conv2D) (None, 27, 36, 512) 2359808 padding5_3d[0][0] __________________________________________________________________________________________________ bn_conv5_3 (BatchNormalization) (None, 27, 36, 512) 2048 conv5_3[0][0] __________________________________________________________________________________________________ bn_conv5_3d (BatchNormalization (None, 27, 36, 512) 2048 conv5_3d[0][0] __________________________________________________________________________________________________ relu5_3 (Activation) (None, 27, 36, 512) 0 bn_conv5_3[0][0] __________________________________________________________________________________________________ rois (InputLayer) (None, 5) 0 __________________________________________________________________________________________________ relu5_3d (Activation) (None, 27, 36, 512) 0 bn_conv5_3d[0][0] __________________________________________________________________________________________________ rois_context (InputLayer) (None, 5) 0 __________________________________________________________________________________________________ pool5 (RoiPoolingConvSingle) (None, 7, 7, 512) 0 relu5_3[0][0] rois[0][0] __________________________________________________________________________________________________ pool5d (RoiPoolingConvSingle) (None, 7, 7, 512) 0 relu5_3d[0][0] rois[0][0] __________________________________________________________________________________________________ pool5_context (RoiPoolingConvSi (None, 7, 7, 512) 0 relu5_3[0][0] rois_context[0][0] __________________________________________________________________________________________________ pool5d_context (RoiPoolingConvS (None, 7, 7, 512) 0 relu5_3d[0][0] rois_context[0][0] __________________________________________________________________________________________________ flatten (Flatten) (None, 25088) 0 pool5[0][0] __________________________________________________________________________________________________ flatten_d (Flatten) (None, 25088) 0 pool5d[0][0] __________________________________________________________________________________________________ flatten_context (Flatten) (None, 25088) 0 pool5_context[0][0] __________________________________________________________________________________________________ 
flatten_d_context (Flatten) (None, 25088) 0 pool5d_context[0][0] __________________________________________________________________________________________________ concat (Concatenate) (None, 100352) 0 flatten[0][0] flatten_d[0][0] flatten_context[0][0] flatten_d_context[0][0] __________________________________________________________________________________________________ fc6 (Dense) (None, 4096) 411045888 concat[0][0] __________________________________________________________________________________________________ drop6 (Dropout) (None, 4096) 0 fc6[0][0] __________________________________________________________________________________________________ fc7 (Dense) (None, 4096) 16781312 drop6[0][0] __________________________________________________________________________________________________ drop7 (Dropout) (None, 4096) 0 fc7[0][0] __________________________________________________________________________________________________ cls_score (Dense) (None, 20) 81940 drop7[0][0] __________________________________________________________________________________________________ bbox_pred_3d (Dense) (None, 140) 573580 drop7[0][0] ================================================================================================== Total params: 457,942,816 Trainable params: 457,927,456 Non-trainable params: 15,360 ```

The model's inputs and outputs are defined as:

```python
tf_model = Model(
    inputs=[img, dmap, rois, rois_context],
    outputs=[cls_score, bbox_pred_3d]
)
```

The format I use at predict time is:

```python
roi2d = twod_Proposal('test1_rgb.jpg', 'q')  # get the ROI proposals
print('------------------------------------------')
# for roi in roi2d:
#     print(roi)
# print('------------------------------------------')

# TODO: Set input to the model
img = cv2.imread('test1_rgb.jpg')
dmap = cv2.imread('test1_depth.jpg')
roi2d_context = get_roi_context(roi2d)
tf_model = make_deng_tf_test()
show_model_info(tf_model)
tf_model.compile(loss='mean_squared_error', optimizer='adam', metrics=['acc'])
[score, result_predict] = tf_model.predict([img, dmap, roi2d, roi2d_context])
```

The error message is:

```
Using TensorFlow backend.
Total Number of Region Proposals: 5012
------------------------------------------
WARNING:tensorflow:From /Users/anaconda2/envs/python3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-07-01 09:51:26.089446: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-07-01 09:51:26.090541: I tensorflow/core/common_runtime/process_util.cc:71] Creating new thread pool with default inter op setting: 4. Tune using inter_op_parallelism_threads for best performance.
WARNING:tensorflow:From /Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Traceback (most recent call last):
  File "/Users/Documents/PycharmProjects/Amodal3Det_TF/tfmodel/model.py", line 375, in <module>
    [score, result_predict] = tf_model.predict([img, dmap, roi2d, roi2d_context])
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 1149, in predict
    x, _, _ = self._standardize_user_data(x)
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training.py", line 751, in _standardize_user_data
    exception_prefix='input')
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 92, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 92, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "/Users/anaconda2/envs/python3/lib/python3.6/site-packages/keras/engine/training_utils.py", line 27, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'list' object has no attribute 'ndim'

Process finished with exit code 1
```

Roughly where is the problem?
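The traceback ends in `standardize_single_array`, which points at the inputs being plain Python lists or unbatched arrays rather than batched numpy arrays. A hedged sketch of the usual first step, converting every input to a numpy array with an explicit batch dimension (the exact shapes this particular model expects are an assumption based on the summary above):

```python
import numpy as np

img_batch  = np.expand_dims(img.astype(np.float32), axis=0)    # (1, H, W, 3)
dmap_batch = np.expand_dims(dmap.astype(np.float32), axis=0)   # (1, H, W, 3)
rois_batch = np.asarray(roi2d, dtype=np.float32)               # (num_rois, 5)
rois_context_batch = np.asarray(roi2d_context, dtype=np.float32)

score, result_predict = tf_model.predict(
    [img_batch, dmap_batch, rois_batch, rois_context_batch])
```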
