[TensorFlow 2.0] Can the Object Detection API be used with TensorFlow 2.0?

I have TensorFlow 2.0 installed on my machine. While setting up the Object Detection API I hit `AttributeError: module 'tensorflow' has no attribute 'contrib'`. Could someone knowledgeable please help explain this? Many thanks.

2 answers

`tf.contrib` no longer exists in TensorFlow 2.x. You can either modify the code to avoid it, or fall back to a 1.1X release (X >= 2, i.e. 1.12 or later).
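For reference, a minimal sketch of the two workarounds. Which `tf.contrib` symbols need replacing depends on the code in question, so the mappings mentioned below are only examples:

```python
# Option 1: pin a 1.x release that still ships tf.contrib:
#   pip install "tensorflow>=1.12,<2.0"

# Option 2: stay on TF 2.x and port the code. tf.compat.v1 restores most 1.x
# APIs, but NOT tf.contrib itself; contrib symbols moved elsewhere, e.g.
#   tf.contrib.data.parallel_interleave -> tf.data.experimental.parallel_interleave
#   many other contrib ops              -> the separate tensorflow_addons package
import tensorflow as tf

tf.compat.v1.disable_eager_execution()                 # run 1.x-style graph code
x = tf.compat.v1.placeholder(tf.float32, [None, 10])   # 1.x API via the compat layer
```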

According to the official documentation, it should be possible:

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md

It requires TensorFlow >= 1.12 and does not say 2.0 is unsupported.
Other related questions
TensorFlow 2.0 -- AttributeError: 'list' object has no attribute '_in_graph_mode' -- help requested

Using TensorFlow 2.0 to classify the MNIST dataset with a single-hidden-layer network; running it raises the error shown in the screenshot: ![error screenshot](https://img-ask.csdn.net/upload/202005/26/1590450739_260038.png) The variables are defined as follows:
```
# Hidden-layer weights and bias
Input_Dim = 784
H1_NN = 64  # 64 neurons in the hidden layer
w1 = tf.Variable(tf.random.normal([Input_Dim, H1_NN]), dtype=tf.float32)
b1 = tf.Variable(tf.zeros([H1_NN]), dtype=tf.float32)
# Output-layer weights and bias
Output_Dim = 10
w2 = tf.Variable(tf.random.normal([H1_NN, Output_Dim]), dtype=tf.float32)
b2 = tf.Variable(tf.zeros([Output_Dim]), dtype=tf.float32)
W = [w1, w2]
B = [b1, b2]
```
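This particular error typically appears when a Python list is handed to an API that expects individual `tf.Variable` objects, e.g. `tape.gradient` or `optimizer.apply_gradients` receiving the nested `[W, B]`. A minimal sketch of the likely fix, continuing from the variables above; `loss_fn`, `x_batch`, and `y_batch` are hypothetical stand-ins for the question's training step:

```python
import tensorflow as tf

optimizer = tf.optimizers.Adam()
variables = W + B   # flatten [w1, w2] + [b1, b2] into one list of tf.Variables

with tf.GradientTape() as tape:
    loss = loss_fn(x_batch, y_batch)           # hypothetical forward pass + loss
grads = tape.gradient(loss, variables)          # not tape.gradient(loss, [W, B])
optimizer.apply_gradients(zip(grads, variables))
```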

A TensorFlow saved PB model is 200+ MB -- how can I shrink it?

Below is the model-saving part of the code:
```python
def save_model(sess, epoch):
    builder = tf.saved_model.builder.SavedModelBuilder("model-v1.0_%d" % epoch)
    builder.add_meta_graph_and_variables(sess, ['v1.0'])
    builder.save()
```
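One common way to cut inference-model size in TF 1.x is to freeze the graph: fold the variables into constants and strip training-only nodes, keeping only what inference needs. A minimal sketch reusing the question's `sess`; the output node name below is a placeholder you would replace with your model's real output ops:

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

output_node_names = ["output/BiasAdd"]  # hypothetical; list your real output ops
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph_def, output_node_names)    # variables -> constants
frozen = graph_util.remove_training_nodes(frozen)  # drop optimizer/training ops
with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

Note this writes a frozen GraphDef rather than a SavedModel directory, which is usually fine for pure inference deployments.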

When training a TensorFlow model (object_detection), training exits after the first evaluation -- how do I make it continue?

While training an SSD model, training ran for about 10 minutes and then entered the evaluation phase; after evaluation the program exited on its own, with no error or warning in sight. Why is that, and how can I make it keep training?

Training command:
```
python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
```

Config file:
```
training exit after the first evaluation (only one evaluation) in Tensorflow model (object_detection) without error and warning

System information
- What is the top-level directory of the model you are using: models/research/object_detection/
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): NO
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows-10 (64bit)
- TensorFlow installed from (source or binary): conda install tensorflow-gpu
- TensorFlow version (use command below): 1.13.1
- Bazel version (if compiling from source): N/A
- CUDA/cuDNN version: cudnn-7.6.0
- GPU model and memory: GeForce GTX 1060 6GB
- Exact command to reproduce: see below

my command for training:
python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr

This is my config:

train_config {
  batch_size: 24
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}
train_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord"
  }
}
eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord"
  }
}
```

Console output:
```
(default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn..model_fn at 0x0000027CBAB7BB70>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.experimental.parallel_interleave(...).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version.
Instructions for updating:
seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data.Dataset.batch(..., drop_remainder=True).
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845
pciBusID: 0000:04:00.0
totalMemory: 6.00GiB freeMemory: 4.97GiB
2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
creating index...
index created!
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=2.43s).
Accumulating evaluation results...
DONE (t=0.14s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.287
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.529
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.162
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.356
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.356
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384
(default) D:\gitcode\models\research>
```

TensorFlow autoencoder placeholder error

```
import numpy as np
import tensorflow as tf


def xavier_init(fan_in, fan_out, constant=1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out), minval=low, maxval=high, dtype=tf.float32)


class AdditiveGaussionNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.relu,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(
            self.x + scale * tf.random_normal((n_input,)),
            self.weights['w1']), self.weights['b1']))
        self.reconstruction = tf.add(tf.matmul(self.hidden, self.weights['w2']),
                                     self.weights['b2'])
        self.cost = tf.sqrt(tf.reduce_mean(tf.pow(tf.subtract(
            self.reconstruction, self.x), 2.0)))
        self.optimizer = optimizer.minimize(self.cost)
        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
        return all_weights

    def partial_fit(self, X):
        cost, opt = self.sess.run((self.cost, self.optimizer),
                                  feed_dict={self.x: X, self.scale: self.training_scale})
        return cost

    def calc_total_cost(self, X):
        return self.sess.run(self.cost, feed_dict={self.x: X, self.scale: self.training_scale})

    def transform(self, X):
        return self.sess.run(self.hidden, feed_dict={self.x: X, self.scale: self.training_scale})

    def generate(self, hidden=None):
        if hidden is None:
            hidden = np.random.normal(size=self.weights['b1'])
        return self.sess.run(self.reconstruction, feed_dict={self.hidden: hidden})

    def reconstruct(self, X):
        return self.sess.run(self.reconstruction,
                             feed_dict={self.x: X, self.scale: self.training_scale})

    def getweights(self):
        return self.sess.run(self.weights['w1'])

    def getbiases(self):
        return self.sess.run(self.weights['b1'])
```
```
import numpy as np
import tensorflow as tf
from DSAE import AdditiveGaussionNoiseAutoencoder
import xlrd
import sklearn.preprocessing as prep

# Data loading; can be converted to csv files for easier handling, see ConvertData
train_input = "/Users/Patrick/Desktop/traffic_data/train_500010092_input.xls"
train_output = "/Users/Patrick/Desktop/traffic_data/train_500010092_output.xls"
test_input = "/Users/Patrick/Desktop/traffic_data/test_500010092_input.xls"
test_output = "/Users/Patrick/Desktop/traffic_data/test_500010092_output.xls"
book_train_input = xlrd.open_workbook(train_input, encoding_override='utf-8')
book_train_output = xlrd.open_workbook(train_output, encoding_override='utf-8')
book_test_input = xlrd.open_workbook(test_input, encoding_override='utf-8')
book_test_output = xlrd.open_workbook(test_output, encoding_override='utf-8')
sheet_train_input = book_train_input.sheet_by_index(0)
sheet_train_output = book_train_output.sheet_by_index(0)
sheet_test_input = book_test_input.sheet_by_index(0)
sheet_test_output = book_test_output.sheet_by_index(0)
data_train_input = np.asarray([sheet_train_input.row_values(i)
                               for i in range(2, sheet_train_input.nrows)])
data_train_output = np.asarray([sheet_train_output.row_values(i)
                                for i in range(2, sheet_train_output.ncols)])
data_test_input = np.asarray([sheet_test_input.row_values(i)
                              for i in range(2, sheet_test_input.nrows)])
data_test_output = np.asarray([sheet_test_output.row_values(i)
                               for i in range(2, sheet_test_output.ncols)])


def standard_scale(X_train, X_test):
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test


X_train, X_test = standard_scale(data_train_input, data_test_input)


def get_block_form_data(data, batch_size, k):
    # start_index = 0
    start_index = k * batch_size
    return data[start_index:(start_index + batch_size)]


training_epochs = 20
batch_size = 288
n_samples = sheet_test_output.nrows
display_step = 1
stack_size = 3
hidden_size = [10, 8, 10]
sdae = []
for i in range(stack_size):
    if i == 0:
        ae = AdditiveGaussionNoiseAutoencoder(n_input=12, n_hidden=hidden_size[i],
                                              transfer_function=tf.nn.relu,
                                              optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                              scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)
    else:
        ae = AdditiveGaussionNoiseAutoencoder(n_input=hidden_size[i-1], n_hidden=hidden_size[i],
                                              transfer_function=tf.nn.relu,
                                              optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                              scale=0.01)
        ae._initialize_weights()
        sdae.append(ae)
W = []
b = []
hidden_feacture = []
X_train = np.array([0])
for j in range(stack_size):
    if j == 0:
        X_train = data_train_input
        X_test = data_test_input
    else:
        X_train_pre = X_train
        X_train = sdae[j-1].transform(X_train_pre)
        print(X_train.shape)
        hidden_feacture.append(X_train)
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(n_samples / batch_size)
        for i in range(total_batch):
            batch_xs = get_block_form_data(X_train, batch_size, i)
            cost = sdae[j].partial_fit(batch_xs)
            avg_cost += cost / n_samples * batch_size
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))
    weight = sdae[j].getweights()
    W.append(weight)
    print(np.shape(W))
    b.append(sdae[j].getbiases())
    print(np.shape(b))
```
The error reported is:
```
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydev_run_in_console.py", line 53, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/Patrick/PycharmProjects/DSAE-SVM/DLmain.py", line 80, in <module>
    X_train = sdae[j-1].transform(X_train_pre)
  File "/Users/Patrick/PycharmProjects/DSAE-SVM/DSAE.py", line 70, in transform
    feed_dict={self.x: X, self.scale: self.training_scale})
  File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/Users/Patrick/anaconda3/envs/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 1113, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (18143, 3) for Tensor 'Placeholder_1:0', which has shape '(?, 12)'
PyDev console: starting.
Python 3.4.5 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:47:57)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
```
I really don't know how to fix the placeholder's shape -- please help explain.
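A hedged observation on the listing above: the traceback says the data fed in has 3 feature columns while the first autoencoder was built with `n_input=12`, and the output arrays are built with `range(2, sheet.ncols)` where the inputs use `nrows`, which looks like a typo. A minimal sketch of the likely fix, deriving `n_input` from the data actually read:

```python
# Build the first autoencoder from the data's real width, not a hard-coded 12.
n_features = data_train_input.shape[1]   # 3 according to the traceback
ae = AdditiveGaussionNoiseAutoencoder(n_input=n_features, n_hidden=hidden_size[0],
                                      transfer_function=tf.nn.relu,
                                      optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                                      scale=0.01)
```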

Tensorflow object detection API reports `Windows fatal exception: access violation` when training on my own data

Python 3.6, tf 1.14.0. The Tensorflow object detection API runs the demo images fine, and switching to a webcam for object recognition also works, but training on my own data fails with `Windows fatal exception: access violation`. I'm using the ssd_mobilenet_v1_coco_2018_01_28 model, with the command:
```
python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
```
This is basically the standard training procedure found online, and it keeps failing with this error. The full output is:
```
(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>python model_main.py -pipeline_config_path=/pre_model/pipeline.config -model_dir=result -num_train_steps=2000 -alsologtostderr
WARNING: Logging before flag parsing goes to stderr.
W0622 16:50:30.230578 14180 lazy_loader.py:50] The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
  * https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
W0622 16:50:30.317274 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
W0622 16:50:30.355400 14180 deprecation_wrapper.py:119] From D:\Anaconda3\libdata\tf_models\research\slim\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.
W0622 16:50:30.388313 14180 deprecation_wrapper.py:119] From model_main.py:109: The name tf.app.run is deprecated. Please use tf.compat.v1.app.run instead.
W0622 16:50:30.397290 14180 deprecation_wrapper.py:119] From D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py:98: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
Windows fatal exception: access violation

Current thread 0x00003764 (most recent call first):
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 84 in _preread_check
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 122 in read
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\utils\config_util.py", line 99 in get_configs_from_pipeline_file
  File "D:\Anaconda3\envs\py36\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\model_lib.py", line 606 in create_estimator_and_inputs
  File "model_main.py", line 71 in main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 251 in _run_main
  File "D:\Anaconda3\envs\py36\lib\site-packages\absl\app.py", line 300 in run
  File "D:\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\platform\app.py", line 40 in run
  File "model_main.py", line 109 in <module>

(py36) D:\pythonpro\TensorFlowLearn\face_tf_model>
```
Could an expert please advise?

Tensorflow object detection API errors out with the VOC dataset.

Environment: Win7 + Anaconda + Python 3.6 + tensorflow 1.12.0. During object detection, running train.py jumps into array_ops.py and fails at line 903:
```
if ops.is_dense_tensor_like(elem):
    if dtype is not None and elem.dtype.base_dtype != dtype:
        raise TypeError("Cannot convert a list containing a tensor of dtype "
                        "%s to %s (Tensor is: %r)" % (elem.dtype, dtype, elem))
    converted_elems.append(elem)
    must_pack = True
elif isinstance(elem, (list, tuple)):
    converted_elem = _autopacking_helper(elem, dtype, str(i))
    if ops.is_dense_tensor_like(converted_elem):
        must_pack = True
    converted_elems.append(converted_elem)
else:
    converted_elems.append(elem)
```
The error shown is: TypeError: Cannot convert a list containing a tensor of dtype <dtype: 'int32'> to <dtype: 'float32'> (Tensor is: <tf.Tensor 'Preprocessor/stack_1:0' shape=(1, 3) dtype=int32>). Has anyone hit the same problem, and how did you solve it? I've searched a lot without finding anything reliable.

Tensorflow object detection api: mAP stays at -1 when training on my own data

Training my own data with the Tensorflow object detection api, mAP is always -1 while the loss stays very low. The result looks like this: ![eval screenshot](https://img-ask.csdn.net/upload/201910/14/1571048294_159662.png) loss: ![loss screenshot](https://img-ask.csdn.net/upload/201910/15/1571105856_282377.jpg) The model is this one from the model zoo: ![model screenshot](https://img-ask.csdn.net/upload/201910/14/1571047856_17157.jpg) The pipeline config is as follows:
```
model {
  faster_rcnn {
    num_classes: 25
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 720
        max_dimension: 1280
      }
    }
    feature_extractor {
      type: "faster_rcnn_resnet50"
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        height_stride: 16
        width_stride: 16
        scales: 0.25
        scales: 0.5
        scales: 1.0
        scales: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.00999999977648
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079
    first_stage_max_proposals: 100
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        use_dropout: false
        dropout_keep_probability: 1.0
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.300000011921
        iou_threshold: 0.600000023842
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config {
  batch_size: 1
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  optimizer {
    momentum_optimizer {
      learning_rate {
        manual_step_learning_rate {
          initial_learning_rate: 0.000300000014249
          schedule {
            step: 900000
            learning_rate: 2.99999992421e-05
          }
          schedule {
            step: 1200000
            learning_rate: 3.00000010611e-06
          }
        }
      }
      momentum_optimizer_value: 0.899999976158
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/home/yons/code/自动驾驶视觉综合感知/faster_rcnn_resnet50_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}
train_input_reader {
  label_map_path: "/home/yons/code/自动驾驶视觉综合感知/pascal_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "/home/yons/data/自动驾驶视觉综合感知/train_dataset/tfRecord/train/coco_train.record"
  }
}
eval_config {
  num_examples: 200
  max_evals: 10
  use_moving_averages: false
  metrics_set: "coco_detection_metrics"
}
eval_input_reader {
  label_map_path: "/home/yons/code/自动驾驶视觉综合感知/pascal_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "/home/yons/data/自动驾驶视觉综合感知/train_dataset/tfRecord/val/coco_val.record"
  }
}
```
The label_map config:
```
item { id: 1  name: 'red' }
item { id: 2  name: 'green' }
item { id: 3  name: 'yellow' }
item { id: 4  name: 'red_left' }
item { id: 5  name: 'red_right' }
item { id: 6  name: 'yellow_left' }
item { id: 7  name: 'yellow_right' }
item { id: 8  name: 'green_left' }
item { id: 9  name: 'green_right' }
item { id: 10 name: 'red_forward' }
item { id: 11 name: 'green_forward' }
item { id: 12 name: 'yellow_forward' }
item { id: 13 name: 'horizon_red' }
item { id: 14 name: 'horizon_green' }
item { id: 15 name: 'horizon_yellow' }
item { id: 16 name: 'off' }
item { id: 17 name: 'traffic_sign' }
item { id: 18 name: 'car' }
item { id: 19 name: 'motor' }
item { id: 20 name: 'bike' }
item { id: 21 name: 'bus' }
item { id: 22 name: 'truck' }
item { id: 23 name: 'suv' }
item { id: 24 name: 'express' }
item { id: 25 name: 'person' }
```
Parsing my own tfrecord data: ![tfrecord screenshot](https://img-ask.csdn.net/upload/201910/14/1571048123_764545.png) ![tfrecord screenshot](https://img-ask.csdn.net/upload/201910/14/1571048152_258259.png)
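An overall mAP of -1 from the COCO metrics usually means the evaluator saw no matching ground truth at all, so one thing worth checking is whether the eval record really contains box annotations with the expected label ids. A minimal sketch of such a check, with a hypothetical path:

```python
import tensorflow as tf

# Hypothetical path; point it at your coco_val.record.
path = "coco_val.record"
for i, rec in enumerate(tf.python_io.tf_record_iterator(path)):
    ex = tf.train.Example.FromString(rec)
    labels = ex.features.feature["image/object/class/label"].int64_list.value
    print("example", i, "has", len(labels), "boxes; labels:", list(labels)[:5])
    if i >= 4:
        break
```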

A question about dense layers in Python TensorFlow

What rule governs the `units` parameter of tf.layers.dense? Is a larger dimension always more accurate? I'm just starting out; a detailed explanation would be appreciated, thanks.
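For context: `units` is simply the layer's output dimensionality, i.e. its number of neurons. It is a capacity hyperparameter, so larger is not automatically more accurate; too large a value invites overfitting and slows training, and it is usually tuned on validation data. A minimal 1.x-style sketch:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128])              # 128 input features
h = tf.layers.dense(x, units=64, activation=tf.nn.relu)  # hidden layer: 128 -> 64
logits = tf.layers.dense(h, units=10)                    # output layer: 64 -> 10 classes
```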

Training my own dataset with tensorflow

Training a VGG network on my own dataset under the tensorflow framework to produce a model, the computed loss stays at 0.69 the whole time. Why is that, and how should I adjust things?

A question about training on my own tfrecord dataset with tensorflow

```
import os
import tensorflow as tf
from PIL import Image
import matplotlib.pyplot as plt
import readfileTFRecord
import input_data_record


def weight_varible(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)


def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)


def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')


#mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print("Loading Done!")
sess = tf.InteractiveSession()

# paras
W_conv1 = weight_varible([5, 5, 1, 32])
b_conv1 = bias_variable([32])

# conv layer-1
x = tf.placeholder(tf.float32, [None, 784])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# conv layer-2
W_conv2 = weight_varible([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# full connection
W_fc1 = weight_varible([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# output layer: softmax
W_fc2 = weight_varible([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
y_ = tf.placeholder(tf.float32, [None, 10])

# model training
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.arg_max(y_conv, 1), tf.arg_max(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())

img, label = readfileTFRecord.read_and_decode("train_min.tfrecords")
img_batch, label_batch = tf.train.shuffle_batch([img, label],
                                                batch_size=3, capacity=30,
                                                min_after_dequeue=9)
#img_batch, label_batch = input_data_record.get_batch(img, label, 28, 28, 3, 30)
init = tf.initialize_all_variables()
#with tf.Session() as sess:
sess.run(init)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
    for i in range(30):
        if coord.should_stop():
            break
        val, l = sess.run([img_batch, label_batch])
        #l = to_categorical(l, 12)
        train_accuacy = accuracy.eval(feed_dict={x: val, y_: l, keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuacy))
        sess.graph.finalize()
        train_step.run(feed_dict={x: val, y_: l, keep_prob: 1.0})
        print(val.shape, l)
except tf.errors.OutOfRangeError:
    print('Done training --epoch limit reached')
finally:
    coord.request_stop()
coord.join(threads)
sess.close()
```
Error:
```
ValueError: Cannot feed value of shape (3, 28, 28, 1) for Tensor u'Placeholder:0', which has shape '(?, 784)'
```
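The shapes in the error message point at the likely fix: the placeholder `x` expects flattened 784-vectors while the decoded batch arrives as (3, 28, 28, 1). A minimal sketch, assuming `val` is a numpy array as in the loop above:

```python
# Flatten the (batch, 28, 28, 1) images to (batch, 784) before feeding x:
val = val.reshape(-1, 784)
train_accuracy = accuracy.eval(feed_dict={x: val, y_: l, keep_prob: 1.0})
```

The commented-out `to_categorical` call also hints that the labels `l` may need one-hot encoding to match the shape of `y_`.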

Tensorflow object detection: getting the bounding box's ordinary pixel coordinates

Object detection algorithms like SSD and Faster R-CNN can recognize and localize objects, but how can I display the object's center or the box's xy coordinates in ordinary pixels while drawing the box? I implemented some code, but there is a problem; please help:
```
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get all ops in the graph
            ops = tf.get_default_graph().get_operations()
            # Get the names of the output tensors
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                'num_detections', 'detection_boxes', 'detection_scores',
                'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                # If tensor_name is among all_tensor_names,
                if tensor_name in all_tensor_names:
                    # then fetch that tensor
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[1], image.shape[2])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(detection_masks_reframed, 0)
            # The image input tensor
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Feed the image in and run the model to get results
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image})
            # All results are float32; some need type conversion
            # Number of detected objects
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            # Object classes
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.uint8)
            # Predicted box coordinates
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            # Predicted box confidences
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            boxes = np.squeeze(output_dict['detection_boxes'])
            scores = np.squeeze(output_dict['detection_scores'])
            # set a min thresh score, say 0.8
            min_score_thresh = 0.8
            bboxes = boxes[scores > min_score_thresh]
            # get image size
            im_width, im_height = image.size
            final_box = []
            for box in range(bboxes):
                ymin, xmin, ymax, xmax = box
                final_box.append([xmin * im_width, xmax * im_width,
                                  ymin * im_height, ymax * im_height])
    return output_dict
```
```
#for root, dirs, files in os.walk('test_images/'):
for root, dirs, files in os.walk('test/'):
    for image_path in files:
        # Read the image
        image = Image.open(os.path.join(root, image_path))
        # Turn the image into 3-D data of dtype uint8
        image_np = load_image_into_numpy_array(image)
        # Add a dimension; the data becomes [1, None, None, 3]
        image_np_expanded = np.expand_dims(image_np, axis=0)
        # Object detection
        output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
        # Draw the predicted boxes, confidences, and classes on the original image
        vis_util.visualize_boxes_and_labels_on_image_array(
            image_np,
            output_dict['detection_boxes'],
            output_dict['detection_classes'],
            output_dict['detection_scores'],
            category_index,
            use_normalized_coordinates=True,
            line_thickness=8)
        # Plot
        # print("box : ", final_box)
        plt.figure(figsize=(12, 8))
        plt.imshow(image_np)
        plt.axis('off')
        plt.show()
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-32205908683b> in <module>
      9         image_np_expanded = np.expand_dims(image_np, axis=0)
     10         # Object detection
---> 11         output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
     12         # Draw the predicted boxes, confidences, and classes on the original image
     13         vis_util.visualize_boxes_and_labels_on_image_array(

<ipython-input-23-2044b0b101cc> in run_inference_for_single_image(image, graph)
     56             bboxes = boxes[scores > min_score_thresh]
     57             # get image size
---> 58             im_width, im_height = image.size
     59             final_box = []
     60             for box in range(bboxes):

TypeError: 'int' object is not iterable
```
![screenshot](https://img-ask.csdn.net/upload/201908/25/1566710175_880656.jpg) ![screenshot](https://img-ask.csdn.net/upload/201908/25/1566710262_253891.jpg)
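The traceback already isolates the two bugs inside `run_inference_for_single_image`: here `image` is the expanded numpy array, not a PIL image, so `image.size` is a single int (the total element count), and `for box in range(bboxes)` wrongly calls `range` on an array. A minimal sketch of the fix:

```python
# image has shape (1, H, W, 3) here, so take H and W from .shape,
# and iterate the filtered boxes directly:
im_height, im_width = image.shape[1], image.shape[2]
final_box = []
for ymin, xmin, ymax, xmax in bboxes:
    final_box.append([xmin * im_width, xmax * im_width,
                      ymin * im_height, ymax * im_height])
```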

module 'tensorflow' has no attribute 'flags' -- does anyone know why this errors?

```
import tensorflow as tf
flags = tf.flags
```
This reports the error above. Changing it to `flags = tf.app.flags` instead reports `module 'tensorflow' has no attribute 'app'`. My tensorflow is the latest version -- surely it isn't a version problem?
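Assuming "latest version" means TensorFlow 2.x, it actually is a version issue: both `tf.flags` and `tf.app` were removed in 2.x. A minimal sketch of the two usual replacements:

```python
import tensorflow as tf

# Option 1: the 1.x compatibility layer that TF 2.x still ships:
flags = tf.compat.v1.flags

# Option 2: use absl directly, which is what tf.flags used to wrap:
from absl import app, flags as absl_flags
```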

What kind of type in tensorflow supports .next_batch?

I have plain Python matrices, tensor-typed matrices, and TensorDataset objects, but none of them supports .next_batch. What type do I need to convert to in order to call .next_batch? The original data was read from a csv file as strings and converted to floats; the question came up when I wanted to feed variables with this data. Is there a simpler, better way? Searching online didn't really answer it.
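For reference: `next_batch()` is a method of the old MNIST-tutorial `DataSet` helper class, not of arrays or tensors in general. The idiomatic replacement is a `tf.data` pipeline; a minimal 1.x-style sketch with hypothetical `features`/`labels` arrays standing in for the csv-derived data:

```python
import numpy as np
import tensorflow as tf

features = np.random.rand(100, 4).astype(np.float32)  # hypothetical csv-derived data
labels = np.random.rand(100, 1).astype(np.float32)

dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(100).batch(32)
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()   # plays the role of .next_batch()

with tf.Session() as sess:
    batch_x, batch_y = sess.run(next_batch)
```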

A problem with horovod.tensorflow

On python 3.7.5 I dropped the horovod package into the folder directly, then installed tensorflow from cmd; checking each separately shows no problem. But with `import tensorflow as tf` and `import horovod.tensorflow as hvd`, pycharm says the module horovod.tensorflow doesn't exist, and none of the following code can run. I've tried changing the tensorflow version, both 1.15 and 2.0, but neither works. Has anyone run into the same problem and solved it?

Problems with the training code when doing machine translation with tensorflow

```
# -*- coding:UTF-8 -*-
import tensorflow as tf

src_path = 'D:/Python37/untitled1/train.tags.en-zh.en.deletehtml'
trg_path = 'D:/Python37/untitled1/train.tags.en-zh.zh.deletehtml'
SRC_TRAIN_DATA = 'D:/Python37/untitled1/train.tags.en-zh.en.deletehtml.segment'  # source-language input file
TRG_TRAIN_DATA = 'D:/Python37/untitled1/train.tags.en-zh.zh.deletehtml.segment'  # target-language input file
CHECKPOINT_PATH = './model/seq2seq_ckpt'  # checkpoint save path

HIDDEN_SIZE = 1024              # size of the LSTM hidden layer
NUM_LAYERS = 2                  # number of LSTM layers in the deep RNN
SRC_VOCAB_SIZE = 10000          # source-language vocabulary size
TRG_VOCAB_SIZE = 4000           # target-language vocabulary size
BATCH_SIZE = 100                # batch size of the training data
NUM_EPOCH = 5                   # number of passes over the training data
KEEP_PROB = 0.8                 # probability that a node is NOT dropped out
MAX_GRAD_NORM = 5               # gradient cap used to control gradient explosion
SHARE_EMB_AND_SOFTMAX = True    # share parameters between the softmax layer and the embedding layer
MAX_LEN = 50                    # maximum number of words allowed per sentence
SOS_ID = 1                      # ID of <sos> in the target vocabulary


"""
function:
    Batch the data and produce the final input format
Parameters:
    file_path - data path
Returns:
    dataset - a TextLineDataset whose elements are (sentence, length) tensors
"""
def MakeDataset(file_path):
    dataset = tf.data.TextLineDataset(file_path)
    # map(function, sequence[, sequence, ...]) -> list
    # The first argument is a function; the rest are one or more sequences, and the
    # return value is a collection. map applies the function to every element of the
    # sequence and returns the list of per-element results.
    # lambda argument_list: expression
    # lambda is a reserved Python keyword; argument_list and expression are
    # user-defined: argument_list is the parameter list, expression the function body.
    # Split the word ids on spaces into a 1-D vector
    dataset = dataset.map(lambda string: tf.string_split([string]).values)
    # Convert the string-typed word ids into integers
    dataset = dataset.map(lambda string: tf.string_to_number(string, tf.int32))
    # Count the words per sentence and put the count into the Dataset together
    # with the sentence itself
    dataset = dataset.map(lambda x: (x, tf.size(x)))
    return dataset


"""
function:
    Read data from the source-language file src_path and the target-language file
    trg_path, then pad and batch them
Parameters:
    src_path - source language, i.e. the language being translated, English
    trg_path - target language, the language after translation, Chinese
    batch_size - batch size
Returns:
    dataset - a TextLineDataset-based dataset of sentences and their lengths
"""
def MakeSrcTrgDataset(src_path, trg_path, batch_size):
    # First read the source-language and target-language data separately
    src_data = MakeDataset(src_path)
    trg_data = MakeDataset(trg_path)
    # zip the two Datasets into one; each item ds now consists of 4 tensors:
    # ds[0][0] is the source sentence
    # ds[0][1] is the source-sentence length
    # ds[1][0] is the target sentence
    # ds[1][1] is the target-sentence length
    # See https://blog.csdn.net/qq_32458499/article/details/78856530 for a closer
    # look at the Dataset library and the usage of .map and .zip
    dataset = tf.data.Dataset.zip((src_data, trg_data))

    # Drop empty sentences (containing only <eos>) and overlong sentences
    def FilterLength(src_tuple, trg_tuple):
        ((src_input, src_len), (trg_label, trg_len)) = (src_tuple, trg_tuple)
        # tf.logical_and works like set intersection: the result is true only when
        # both operands are true, otherwise false
        # tf.greater returns the truth value of (x > y), so the sentence length
        # must be greater than one, i.e. the sentence must not be empty
        # tf.less_equal returns the truth value of (x <= y), so the length must be
        # at most the maximum length
        src_len_ok = tf.logical_and(tf.greater(src_len, 1), tf.less_equal(src_len, MAX_LEN))
        trg_len_ok = tf.logical_and(tf.greater(trg_len, 1), tf.less_equal(trg_len, MAX_LEN))
        return tf.logical_and(src_len_ok, trg_len_ok)  # true only if both hold

    # filter applies a function Func to every element of the dataset and keeps or
    # drops the element according to the returned True/False, keeping on True.
    # The result is the dataset without empty and overlong sentences
    dataset = dataset.filter(FilterLength)

    # The decoder needs the target sentence in two formats:
    # 1. the decoder input (trg_input), shaped like '<sos> X Y Z'
    # 2. the decoder target output (trg_label), shaped like 'X Y Z <eos>'
    # The target sentences read from file have the form 'X Y Z <eos>'; we must
    # generate the '<sos> X Y Z' form from them and add it to the Dataset.
    # The encoder has only inputs; the decoder has both inputs and outputs, and its
    # input is <sos> + (the label list minus the final eos).
    # For example, every line of train.en ends in 2, and id 2 is eos
    def MakeTrgInput(src_tuple, trg_tuple):
        ((src_input, src_len), (trg_label, trg_len)) = (src_tuple, trg_tuple)
        # tf.concat usage: https://blog.csdn.net/qq_33431368/article/details/79429295
        trg_input = tf.concat([[SOS_ID], trg_label[:-1]], axis=0)
        return ((src_input, src_len), (trg_input, trg_label, trg_len))

    dataset = dataset.map(MakeTrgInput)

    # Shuffle the training data
    dataset = dataset.shuffle(10000)

    # Declare the padded output shapes
    padded_shapes = (
        (tf.TensorShape([None]),   # the source sentence is a vector of unknown length
         tf.TensorShape([])),      # the source-sentence length is a single number
        (tf.TensorShape([None]),   # the target sentence (decoder input) is a vector of unknown length
         tf.TensorShape([None]),   # the target sentence (decoder target output) is a vector of unknown length
         tf.TensorShape([])))      # the target-sentence length (output) is a single number
    # Call padded_batch to perform the padding and batching
    batched_dataset = dataset.padded_batch(batch_size, padded_shapes)
    return batched_dataset


"""
function: the seq2seq model
"""
class NMTModel(object):
    """
    function: model initialization
    """
    def __init__(self):
        # LSTM structures used by the encoder and the decoder
        self.enc_cell = tf.nn.rnn_cell.MultiRNNCell(
            [tf.nn.rnn_cell.LSTMCell(HIDDEN_SIZE) for _ in range(NUM_LAYERS)])
        self.dec_cell = tf.nn.rnn_cell.MultiRNNCell(
            [tf.nn.rnn_cell.LSTMCell(HIDDEN_SIZE) for _ in range(NUM_LAYERS)])
        # Separate word embeddings for the source and target languages
        self.src_embedding = tf.get_variable('src_emb', [SRC_VOCAB_SIZE, HIDDEN_SIZE])
        self.trg_embedding = tf.get_variable('trg_emb', [TRG_VOCAB_SIZE, HIDDEN_SIZE])
        # Softmax-layer variables
        if SHARE_EMB_AND_SOFTMAX:
            self.softmax_weight = tf.transpose(self.trg_embedding)
        else:
            self.softmax_weight = tf.get_variable('weight', [HIDDEN_SIZE, TRG_VOCAB_SIZE])
        self.softmax_bias = tf.get_variable('softmax_loss', [TRG_VOCAB_SIZE])

    """
    function: define the model's forward computation graph
    Parameters:
        The five tensors produced by MakeSrcTrgDataset (all tensors):
        src_input: encoder input (source data)
        src_size:  input size
        trg_input: decoder input (target data)
        trg_label: decoder output (target data)
        trg_size:  output size
    """
    def forward(self, src_input, src_size, trg_input, trg_label, trg_size):
        batch_size = tf.shape(src_input)[0]
        # Convert the input and output words to embeddings (RNN inputs must be
        # embeddings), i.e. look up the embedding vector for each id in the input
        src_emb = tf.nn.embedding_lookup(self.src_embedding, src_input)
        trg_emb = tf.nn.embedding_lookup(self.trg_embedding, trg_input)
        # Apply dropout on the embeddings
        src_emb = tf.nn.dropout(src_emb, KEEP_PROB)
        trg_emb = tf.nn.dropout(trg_emb, KEEP_PROB)
        # Build the encoder with dynamic_rnn.
        # The encoder reads the embedding at each position of the source sentence
        # and outputs the last step's hidden state enc_state.
        # Because the encoder is a two-layer LSTM, enc_state is a tuple containing
        # two LSTMStateTuples, each corresponding to one encoder layer.
        # enc_outputs is the top LSTM's output at each step, with shape
        # [batch_size, max_time, HIDDEN_SIZE]; the seq2seq model does not need
        # enc_outputs, but attention models do
        with tf.variable_scope('encoder'):
            enc_outputs, enc_state = tf.nn.dynamic_rnn(self.enc_cell, src_emb,
                                                       src_size, dtype=tf.float32)
        # Build the decoder with dynamic_rnn.
        # The decoder reads the embedding at each position of the target sentence;
        # dec_outputs is the top LSTM's output at each step, with shape
        # [batch_size, max_time, HIDDEN_SIZE].
        # initial_state=enc_state initializes the first step's hidden state with
        # the encoder's output: the encoder's final state initializes the decoder
        with tf.variable_scope('decoder'):
            dec_outputs, _ = tf.nn.dynamic_rnn(self.dec_cell, trg_emb, trg_size,
                                               initial_state=enc_state)
        # Compute the decoder's per-step log perplexity;
        # reshape the output to shape [, HIDDEN_SIZE]
        output = tf.reshape(dec_outputs, [-1, HIDDEN_SIZE])
        # The decoder's per-step softmax logits
        logits = tf.matmul(output, self.softmax_weight) + self.softmax_bias
        # Cross-entropy loss
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=tf.reshape(trg_label, [-1]), logits=logits)
        # When averaging the loss, set the weight of padded positions to 0 so that
        # predictions at invalid positions do not disturb training
        label_weights = tf.sequence_mask(trg_size, maxlen=tf.shape(trg_label)[1],
                                         dtype=tf.float32)
        label_weights = tf.reshape(label_weights, [-1])
        cost = tf.reduce_sum(loss * label_weights)
        cost_per_token = cost / tf.reduce_sum(label_weights)
        # Define the backpropagation operations
        trainable_variables = tf.trainable_variables()
        # Control the gradient size and define the optimization method and training
        # step: compute the gradient of every value to be updated and clip it
        grads = tf.gradients(cost / tf.to_float(batch_size), trainable_variables)
        grads, _ = tf.clip_by_global_norm(grads, MAX_GRAD_NORM)
        # Optimize with gradient descent at learning rate 1.0
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=1.0)
        # Equivalent to the second half of minimize: normally the [grads, vars] list
        # comes from compute_gradients; this returns the op that applies the
        # corresponding variable updates
        train_op = optimizer.apply_gradients(zip(grads, trainable_variables))
        return cost_per_token, train_op


"""
function:
    Train one epoch on the given model and return the global step count,
    saving a checkpoint every 200 training steps
Parameters:
    session  - the session
    cost_op  - the op computing the loss
    train_op - the training op
    saver    - the class that saves the model
    step     - the training step count
"""
def run_epoch(session, cost_op, train_op, saver, step):
    # Train one epoch;
    # repeat the training step until all data in the Dataset has been visited
    while True:
        try:
            # Run train_op and compute cost_op, i.e. the loss; the training data is
            # provided as a Dataset in main()
            cost, _ = session.run([cost_op, train_op])
            # Print when the step count is a multiple of 10
            if step % 10 == 0:
                print('After %d steps, per token cost is %.3f' % (step, cost))
            # Save a checkpoint every 200 steps
            if step % 200 == 0:
                saver.save(session, CHECKPOINT_PATH, global_step=step)
            step += 1
        except tf.errors.OutOfRangeError:
            break
    return step


"""
function: main
"""
def main():
    # Initializer
    initializer = tf.random_uniform_initializer(-0.05, 0.05)
    # The RNN model used for training
    with tf.variable_scope('nmt_model', reuse=None, initializer=initializer):
        train_model = NMTModel()
    # Define the input data
    data = MakeSrcTrgDataset(SRC_TRAIN_DATA, TRG_TRAIN_DATA, BATCH_SIZE)
    iterator = data.make_initializable_iterator()
    (src, src_size), (trg_input, trg_label, trg_size) = iterator.get_next()
    # The forward graph; the input data is given to forward as tensors
    cost_op, train_op = train_model.forward(src, src_size, trg_input, trg_label, trg_size)
    # Train and save the model
    saver = tf.train.Saver()
    step = 0
    with tf.Session() as sess:
        # Initialize all variables
        tf.global_variables_initializer().run()
        # Run NUM_EPOCH epochs
        for i in range(NUM_EPOCH):
            print('In iteration: %d' % (i + 1))
            sess.run(iterator.initializer)
            step = run_epoch(sess, cost_op, train_op, saver, step)


if __name__ == '__main__':
    main()
```
The problem is as follows, and I don't know how to solve it. Thanks!
```
Traceback (most recent call last):
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: StringToNumberOp could not correctly convert string: This
	 [[{{node StringToNumber}}]]
	 [[{{node IteratorGetNext}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:/Python37/untitled1/train_model.py", line 277, in <module>
    main()
  File "D:/Python37/untitled1/train_model.py", line 273, in main
    step = run_epoch(sess, cost_op, train_op, saver, step)
  File "D:/Python37/untitled1/train_model.py", line 231, in run_epoch
    cost, _ = session.run([cost_op, train_op])
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: StringToNumberOp could not correctly convert string: This
	 [[{{node StringToNumber}}]]
	 [[node IteratorGetNext (defined at D:/Python37/untitled1/train_model.py:259) ]]
```
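The error text itself is informative: `StringToNumberOp could not correctly convert string: This` means the pipeline read the literal word "This", so the file handed to `MakeDataset` appears to contain raw sentences rather than the numeric word ids the code expects. `MakeDataset` assumes every line is space-separated integer ids. A minimal sketch of a check, assuming the paths defined above:

```python
# Each line of SRC_TRAIN_DATA / TRG_TRAIN_DATA should look like "12 7 903 2",
# not "This is a sentence ." -- inspect the first line to confirm:
with open(SRC_TRAIN_DATA, encoding='utf-8') as f:
    print(f.readline())
```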

Problems running model_main.py in tensorflow!

Straight to the output:
```
D:\tensorflow\models\research\object_detection>python model_main.py --pipeline_config_path=E:\python_demo\pedestrian_demo\pedestrian_train\models\pipeline.config --model_dir=E:\python_demo\pedestrian_demo\pedestrian_train\models\train --num_train_steps=5000 --sample_1_of_n_eval_examples=1 --alsologstderr
Traceback (most recent call last):
  File "model_main.py", line 109, in <module>
    tf.app.run()
  File "C:\anaconda\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "model_main.py", line 71, in main
    FLAGS.sample_1_of_n_eval_on_train_examples))
  File "D:\ssd-detection\models-master\research\object_detection\model_lib.py", line 589, in create_estimator_and_inputs
    pipeline_config_path, config_override=config_override)
  File "D:\ssd-detection\models-master\research\object_detection\utils\config_util.py", line 98, in get_configs_from_pipeline_file
    text_format.Merge(proto_str, pipeline_config)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 574, in Merge
    descriptor_pool=descriptor_pool)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 631, in MergeLines
    return parser.MergeLines(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 654, in MergeLines
    self._ParseOrMerge(lines, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 676, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField
    merger(tokenizer, message, field)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 768, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 35:7 : Message type "object_detection.protos.SsdFeatureExtractor" has no field named "batch_norm_trainable".
```
How do I solve this error? Any guidance appreciated!

How to solve an image dimension conversion problem in tensorflow?

This is the image-reading code:
```
def extract_data():
    imgs = []
    training_size, img_train_array, img_train_map_array = read_train_from_txt_file(train_txt_filename)
    for i in range(0, training_size):
        image_filename = img_train_array[i]
        if os.path.isfile(image_filename):
            print('Loading:' + image_filename)
            img_file = cv.imread(image_filename)
            img_file = np.array(img_file)
            imgs.append(img_file)
        else:
            print('File' + image_filename + 'does not exist!')
    num_img = len(imgs)
    img_patches = [img_crop(imgs[i]) for i in range(num_img)]
    data = [img_patches[i][j] for i in range(len(img_patches)) for j in range(len(img_patches[i]))]
    return np.asarray(data)
```
This is the calling code:
```
train_data = extract_data()
train_data_2 = np.array(train_data)
train_data_final = tf.reshape(train_data_2, [None, IMG_PATCH_SIZE, IMG_PATCH_SIZE, 3])
train_label = extract_labels()
train_label_2 = np.array(train_label)
train_label_final = tf.reshape(train_label_2, [None, NUM_LABEL])
```
But it raises this error: TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [None, 16, 16, 3]. Consider casting elements to a supported type. I already converted with asarray, so why is it still a list type? Newbie here, and it's really urgent!!!!
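The list the error complains about is the shape argument, not the image data: `tf.reshape` cannot accept `None` inside its shape list. Use `-1` for the dimension to be inferred. A minimal sketch:

```python
# -1 lets TensorFlow infer the batch dimension; None is not allowed here.
train_data_final = tf.reshape(train_data_2, [-1, IMG_PATCH_SIZE, IMG_PATCH_SIZE, 3])
train_label_final = tf.reshape(train_label_2, [-1, NUM_LABEL])
```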

Why do Tensorflow and keras give different results for the same problem?

**For the cifar-10 classification problem, with the same model structure, loss function, learning rate, and other hyperparameters, implemented once in TensorFlow and once in keras: after 20 epochs the test-set accuracy always differs by several percentage points, and I can't tell where the problem is. The code is below. This is the TF code:**
```
import tensorflow as tf
import numpy as np
import pickle as pk

tf.reset_default_graph()

batch_size = 64
test_size = 10000
img_size = 32
num_classes = 10
training_epochs = 10
test_size = 200

###############################################################################
def unpickle(filename):
    '''Unpack the data'''
    with open(filename, 'rb') as f:
        d = pk.load(f, encoding='latin1')
    return d


def onehot(labels):
    '''one-hot encoding'''
    n_sample = len(labels)
    n_class = max(labels) + 1
    onehot_labels = np.zeros((n_sample, n_class))
    onehot_labels[np.arange(n_sample), labels] = 1
    return onehot_labels


# Training set
data1 = unpickle('data_batch_1')
data2 = unpickle('data_batch_2')
data3 = unpickle('data_batch_3')
data4 = unpickle('data_batch_4')
data5 = unpickle('data_batch_5')
X_train = np.concatenate((data1['data'], data2['data'], data3['data'],
                          data4['data'], data5['data']), axis=0) / 255.0
y_train = np.concatenate((data1['labels'], data2['labels'], data3['labels'],
                          data4['labels'], data5['labels']), axis=0)
y_train = onehot(y_train)
# Test set
test = unpickle('test_batch')
X_test = test['data'] / 255.0
y_test = onehot(test['labels'])
del test, data1, data2, data3, data4, data5

###############################################################################
w = tf.Variable(tf.random_normal([5, 5, 3, 32], stddev=0.01))
w_c = tf.Variable(tf.random_normal([32 * 16 * 16, 512], stddev=0.1))
w_o = tf.Variable(tf.random_normal([512, num_classes], stddev=0.1))


def init_bias(shape):
    return tf.Variable(tf.constant(0.0, shape=shape))


b = init_bias([32])
b_c = init_bias([512])
b_o = init_bias([10])


def model(X, w, w_c, w_o, p_keep_conv, p_keep_hidden, b, b_c, b_o):
    conv1 = tf.nn.conv2d(X, w, strides=[1, 1, 1, 1], padding='SAME')  # 32x32x32
    conv1 = tf.nn.bias_add(conv1, b)
    conv1 = tf.nn.relu(conv1)
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                           padding='SAME')  # 16x16x32
    conv1 = tf.nn.dropout(conv1, p_keep_conv)
    FC_layer = tf.reshape(conv1, [-1, 32 * 16 * 16])
    out_layer = tf.matmul(FC_layer, w_c) + b_c
    out_layer = tf.nn.relu(out_layer)
    out_layer = tf.nn.dropout(out_layer, p_keep_hidden)
    result = tf.matmul(out_layer, w_o) + b_o
    return result


trX, trY, teX, teY = X_train, y_train, X_test, y_test
trX = trX.reshape(-1, img_size, img_size, 3)
teX = teX.reshape(-1, img_size, img_size, 3)
X = tf.placeholder("float", [None, img_size, img_size, 3])
Y = tf.placeholder("float", [None, num_classes])
p_keep_conv = tf.placeholder("float")
p_keep_hidden = tf.placeholder("float")
py_x = model(X, w, w_c, w_o, p_keep_conv, p_keep_hidden, b, b_c, b_o)
Y_ = tf.nn.softmax_cross_entropy_with_logits_v2(logits=py_x, labels=Y)
cost = tf.reduce_mean(Y_)
optimizer = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
predict_op = tf.argmax(py_x, 1)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(training_epochs):
        training_batch = zip(range(0, len(trX), batch_size),
                             range(batch_size, len(trX) + 1, batch_size))
        perm = np.arange(len(trX))
        np.random.shuffle(perm)
        trX = trX[perm]
        trY = trY[perm]
        for start, end in training_batch:
            sess.run(optimizer, feed_dict={X: trX[start:end], Y: trY[start:end],
                                           p_keep_conv: 0.75, p_keep_hidden: 0.5})
        test_batch = zip(range(0, len(teX), test_size),
                         range(test_size, len(teX) + 1, test_size))
        accuracyResult = 0
        for start, end in test_batch:
            accuracyResult = accuracyResult + sum(
                np.argmax(teY[start:end], axis=1) ==
                sess.run(predict_op, feed_dict={X: teX[start:end], Y: teY[start:end],
                                                p_keep_conv: 1, p_keep_hidden: 1}))
        print(i, accuracyResult / 10000)
```
**This is the keras code:**
```
from keras import initializers
from keras.datasets import cifar10
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.optimizers import SGD, Adam, RMSprop
#import matplotlib.pyplot as plt

# CIFAR_10 is a set of 60K images 32x32 pixels on 3 channels
IMG_CHANNELS = 3
IMG_ROWS = 32
IMG_COLS = 32

# constant
BATCH_SIZE = 64
NB_EPOCH = 10
NB_CLASSES = 10
VERBOSE = 1
VALIDATION_SPLIT = 0
OPTIM = RMSprop()

# load dataset
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
#print('X_train shape:', X_train.shape)
#print(X_train.shape[0], 'train samples')
#print(X_test.shape[0], 'test samples')

# convert to categorical
Y_train = np_utils.to_categorical(y_train, NB_CLASSES)
Y_test = np_utils.to_categorical(y_test, NB_CLASSES)

# float and normalization
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

# network
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
                 input_shape=(IMG_ROWS, IMG_COLS, IMG_CHANNELS),
                 kernel_initializer=initializers.random_normal(stddev=0.01),
                 bias_initializer=initializers.Zeros()))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))  # only effective for 0 < rate < 1
model.add(Flatten())
model.add(Dense(512,
                kernel_initializer=initializers.random_normal(stddev=0.1),
                bias_initializer=initializers.Zeros()))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(NB_CLASSES,
                kernel_initializer=initializers.random_normal(stddev=0.1),
                bias_initializer=initializers.Zeros()))
model.add(Activation('softmax'))
model.summary()

# train
model.compile(loss='categorical_crossentropy', optimizer=OPTIM, metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=BATCH_SIZE, epochs=NB_EPOCH,
          validation_split=VALIDATION_SPLIT, verbose=VERBOSE)
score = model.evaluate(X_test, Y_test, batch_size=200, verbose=VERBOSE)
print("Test score:", score[0])
print('Test accuracy:', score[1])
```
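A hedged observation from comparing the two listings: the models are not quite identical, which alone can account for a few points. The TF graph uses a 5x5 convolution kernel while the Keras model uses 3x3 (the dropout settings do correspond: keep probabilities 0.75/0.5 match drop rates 0.25/0.5), so at least the kernel size needs aligning before the comparison is meaningful. For example, in the Keras model:

```python
# Hypothetical alignment: give the Keras model the same 5x5 kernel as the TF graph
model.add(Conv2D(32, (5, 5), padding='same',
                 input_shape=(IMG_ROWS, IMG_COLS, IMG_CHANNELS),
                 kernel_initializer=initializers.random_normal(stddev=0.01),
                 bias_initializer=initializers.Zeros()))
```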

With keras, reading data via imagedatagenerator.flow gives an extremely low training ACC

I'm building a character-recognition neural network. The dataset consists of images named by index, and the label is taken from each image's file name. I wanted to use the ImageDataGenerator and flow functions to improve sample generalization and then feed the generated data into the network, but this way acc = 1/num_classes, essentially zero. Where did it go wrong?
```
datagen = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1
)

def read_train_image(self, name):
    myimg = Image.open(name).convert('RGB')
    return np.array(myimg)

def train(self):
    # Training set
    train_img_list = []
    train_label_list = []
    # Test set
    test_img_list = []
    test_label_list = []
    for file in os.listdir('train'):
        files_img_in_array = self.read_train_image(name='train/' + file)
        train_img_list.append(files_img_in_array)  # Image list add up
        train_label_list.append(int(file.split('_')[0]))  # lable list addup
    for file in os.listdir('test'):
        files_img_in_array = self.read_train_image(name='test/' + file)
        test_img_list.append(files_img_in_array)  # Image list add up
        test_label_list.append(int(file.split('_')[0]))  # lable list addup
    train_img_list = np.array(train_img_list)
    train_label_list = np.array(train_label_list)
    test_img_list = np.array(train_img_list)
    test_label_list = np.array(train_label_list)
    train_label_list = np_utils.to_categorical(train_label_list, 5788)
    test_label_list = np_utils.to_categorical(test_label_list, 5788)
    train_img_list = train_img_list.astype('float32')
    test_img_list = test_img_list.astype('float32')
    test_img_list /= 255.0
    train_img_list /= 255.0
```
This is the image-data handling; images and labels are stored in lists. Below is training with fit_generator:
```
model.fit_generator(
    self.datagen.flow(x=train_img_list, y=train_label_list, batch_size=2),
    samples_per_epoch=len(train_img_list),
    epochs=10,
    validation_data=(test_img_list, test_label_list),
)
```
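One thing that stands out in the snippet above (a hedged observation, not necessarily the whole story): the test arrays are accidentally built from the train lists, so validation never sees the test set, and with 5788 classes a batch_size of 2 makes early accuracy hover near chance for a long time. The copy-paste fix would be:

```python
# The test arrays should come from the test lists, not the train lists:
test_img_list = np.array(test_img_list)
test_label_list = np.array(test_label_list)
```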
