Error in TensorFlow: `from tensorflow.examples.tutorials.mnist import input_data` fails

I've recently been learning TensorFlow with Python, using Spyder as my editor. The following line raises an error:

```python
from tensorflow.examples.tutorials.mnist import input_data
```

The error is:

```
from tensorflow.python.autograph.lang.special_functions import stack

ImportError: cannot import name 'stack'
```

1 Answer

autograph is a recent addition to TensorFlow; could your TF version be too old?

weixin_43998834: My TensorFlow version is 1.12.0
Replied over a year ago
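
If the version-mismatch suspicion is right, a useful first step is to confirm which TensorFlow build Spyder is actually importing; a minimal check, assuming a standard pip or conda install:

```python
# Sanity check: print the version and install location of the TensorFlow
# that this Spyder console actually loads (mixed conda/pip installs are a common culprit).
import tensorflow as tf
print(tf.__version__)
print(tf.__file__)
```

If the reported version is older than expected, upgrading in that same environment (for example `pip install --upgrade tensorflow`) or reinstalling into a clean environment is the usual next step.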
Other related questions
Help: ImportError: cannot import name 'cloud' from 'tensorflow.contrib'

I'm using Tensorflow Object_Detection with TensorFlow 1.14.0 and Protobuf 3.10.0, and the paths are set up, but running the test file raises ImportError: cannot import name 'cloud' from 'tensorflow.contrib'. Which library am I missing?
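
A heavily hedged note: 'cloud' refers to tensorflow.contrib.cloud, which is not built into some TensorFlow 1.14 packages (Windows in particular), so this usually is not a missing third-party library. If the failing line turns out to be an import of contrib.cloud inside the Object Detection code, one workaround people use (an assumption, not an official fix) is to make that import optional:

```python
try:
    from tensorflow.contrib import cloud  # absent from some TF 1.14 builds
except ImportError:
    cloud = None  # acceptable only if the code path being run never uses cloud features
```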

"Python has stopped working" whenever I import keras in my TensorFlow environment?

I'm a Python beginner. Whenever I import keras, Python stops working. My TensorFlow version is 1.2.1 and Keras is 2.1.1; turning off the firewall doesn't help. The code and crash details are below, any advice is appreciated.

```
# -*- coding: utf-8 -*-
import numpy as np
from scipy.io import loadmat, savemat
from keras.utils import np_utils

Problem Event Name: BEX64
Application Name: pythonw.exe
Application Version: 3.6.2150.1013
Application Timestamp: 5970e8ca
Fault Module Name: StackHash_1dc2
Fault Module Version: 0.0.0.0
Fault Module Timestamp: 00000000
Exception Offset: 0000000000000000
Exception Code: c0000005
Exception Data: 0000000000000008
OS Version: 6.1.7601.2.1.0.256.1
Locale ID: 2052
Additional Information 1: 1dc2
Additional Information 2: 1dc22fb1de37d348f27e54dbb5278e7d
Additional Information 3: eae3
Additional Information 4: eae36a4b5ffb27c9d33117f4125a75c2
```

Missing-module problem with TensorFlow 2.0 under Python 3.7

Beginner working on handwritten digit recognition. I hit an import problem I can't solve: ModuleNotFoundError: No module named 'tensorflow.examples.tutorials'.

```python
import keras                                   # import Keras
import numpy as np
from keras.datasets import mnist               # MNIST dataset from keras
from keras.models import Sequential            # sequential model
from keras.layers import Dense                 # fully connected layer
from keras.optimizers import SGD               # optimizer
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data", one_hot = True)
```

![图片说明](https://img-ask.csdn.net/upload/201911/17/1573957701_315782.png) I searched online for a long time but still don't really understand; could someone give me a detailed solution?
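
The tensorflow.examples.tutorials package is not shipped with the TensorFlow 2.0 wheels, which is why the module cannot be found. Since the snippet above already uses Keras, here is a sketch of loading MNIST through tf.keras instead (assuming TF 2.x; variable names are illustrative):

```python
import tensorflow as tf

# Load MNIST without tensorflow.examples.tutorials (removed from the TF 2.x wheels).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Reproduce what input_data.read_data_sets(..., one_hot=True) used to provide:
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)
```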

When training a TensorFlow model (object_detection), training exits after the first evaluation; how do I keep it training?

When I train the SSD model, training runs for about 10 minutes, then the evaluation phase starts, and after the evaluation the program simply exits. I don't see any errors or warnings. Why is that, and how do I keep the program training?

Training command:

```
python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
```

Config file (together with the issue-template information I filled in):

```
training exit after the first evaluation (only one evaluation) in Tensorflow model (object_detection) without error and warning

System information
What is the top-level directory of the model you are using: models/research/object_detection/
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): NO
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows-10 (64bit)
TensorFlow installed from (source or binary): conda install tensorflow-gpu
TensorFlow version (use command below): 1.13.1
Bazel version (if compiling from source): N/A
CUDA/cuDNN version: cudnn-7.6.0
GPU model and memory: GeForce GTX 1060 6GB
Exact command to reproduce: See below my command for training:
python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr

This is my config:

train_config {
  batch_size: 24
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
  optimizer {
    rms_prop_optimizer {
      learning_rate {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.00400000018999
          decay_steps: 800720
          decay_factor: 0.949999988079
        }
      }
      momentum_optimizer_value: 0.899999976158
      decay: 0.899999976158
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 200000
}
train_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/train/Iframe_??????.tfrecord"
  }
}
eval_config {
  num_examples: 8000
  max_evals: 10
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "D:/gitcode/models/research/object_detection/idol/tf_label_map.pbtxt"
  shuffle: false
  num_readers: 1
  tf_record_input_reader {
    input_path: "D:/gitcode/models/research/object_detection/idol/eval/Iframe_??????.tfrecord"
  }
}
```

Console output:

(default) D:\gitcode\models\research>python object_detection/model_main.py --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config --model_dir=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/saved_model --num_train_steps=50000 --alsologtostderr
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered eval_on_train_input_config.num_epochs = 0. Overwriting num_epochs to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn..model_fn at 0x0000027CBAB7BB70>) includes params argument, but params are not passed to Estimator. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:86: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.experimental.parallel_interleave(...). WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\core\preprocessor.py:196: sample_distorted_bounding_box (from tensorflow.python.ops.image_ops_impl) is deprecated and will be removed in a future version. Instructions for updating: seed2 arg is deprecated.Use sample_distorted_bounding_box_v2 instead. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\builders\dataset_builder.py:158: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version. Instructions for updating: Use tf.data.Dataset.batch(..., drop_remainder=True). WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\losses\losses_impl.py:448: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\ops\array_grad.py:425: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. 2019-08-14 16:29:31.607841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: name: GeForce GTX 1060 6GB major: 6 minor: 1 memoryClockRate(GHz): 1.7845 pciBusID: 0000:04:00.0 totalMemory: 6.00GiB freeMemory: 4.97GiB 2019-08-14 16:29:31.621836: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-08-14 16:29:32.275712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-14 16:29:32.283072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-08-14 16:29:32.288675: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-08-14 16:29:32.293514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1) WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\eval_util.py:796: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. 
WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\object_detection-0.1-py3.7.egg\object_detection\utils\visualization_utils.py:498: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version. Instructions for updating: tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape. 2019-08-14 16:41:44.736212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0 2019-08-14 16:41:44.741242: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-14 16:41:44.747522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 2019-08-14 16:41:44.751256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N 2019-08-14 16:41:44.755548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4714 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:04:00.0, compute capability: 6.1) WARNING:tensorflow:From C:\Users\qian\Anaconda3\envs\default\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version. Instructions for updating: Use standard file APIs to check for files with this prefix. creating index... index created! creating index... index created! Running per image evaluation... Evaluate annotation type bbox DONE (t=2.43s). Accumulating evaluation results... DONE (t=0.14s). Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.287 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.529 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.278 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.031 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.312 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.162 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.356 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.356 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.061 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.384 (default) D:\gitcode\models\research>
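
A loudly hedged guess at the cause, since no error is printed: --model_dir points inside the downloaded model folder (the saved_model directory), so the Estimator may pick up existing checkpoint state there and conclude that training has already reached --num_train_steps, in which case it runs the single final evaluation and exits. The usual recommendation is to give model_dir a fresh, empty directory and keep the pretrained weights only in fine_tune_checkpoint. The output path below is a made-up example:

```
python object_detection/model_main.py ^
    --pipeline_config_path=D:/gitcode/models/research/object_detection/ssd_mobilenet_v1_coco_2018_01_28/pipeline.config ^
    --model_dir=D:/gitcode/models/research/training_output ^
    --num_train_steps=50000 ^
    --alsologtostderr
```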

An MNIST example from TensorFlow fails at runtime; asking for help

```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Import data
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])

# The raw formulation of cross-entropy,
#
#   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.nn.softmax(y)),
#                                 reduction_indices=[1]))
#
# can be numerically unstable.
#
# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
# outputs of 'y', and then average across the batch.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

# Train
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                    y_: mnist.test.labels}))
```

The error is:

```
Traceback (most recent call last):
  File "/home/linbinghui/文档/pycode/Text-1.py", line 5, in <module>
    mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py", line 189, in read_data_sets
    local_file = maybe_download(TEST_IMAGES, train_dir, SOURCE_URL + TEST_IMAGES)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py", line 81, in maybe_download
    urllib.request.urlretrieve(source_url, temp_file_name)
  File "/usr/lib/python2.7/urllib.py", line 98, in urlretrieve
    return opener.retrieve(url, filename, reporthook, data)
  File "/usr/lib/python2.7/urllib.py", line 245, in retrieve
    fp = self.open(url, data)
  File "/usr/lib/python2.7/urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.7/urllib.py", line 364, in open_http
    return self.http_error(url, fp, errcode, errmsg, headers)
  File "/usr/lib/python2.7/urllib.py", line 377, in http_error
    result = method(url, fp, errcode, errmsg, headers)
  File "/usr/lib/python2.7/urllib.py", line 642, in http_error_302
    headers, data)
  File "/usr/lib/python2.7/urllib.py", line 669, in redirect_internal
    return self.open(newurl)
  File "/usr/lib/python2.7/urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.7/urllib.py", line 350, in open_http
    h.endheaders(data)
  File "/usr/lib/python2.7/httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 897, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 859, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 836, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 575, in create_connection
    raise err
IOError: [Errno socket error] [Errno 111] Connection refused
```
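
The code itself is the stock MNIST softmax example; the traceback shows read_data_sets() failing while trying to download the dataset (connection refused), so this is a network problem rather than a TensorFlow one. A hedged workaround: fetch the four archives by any means available (browser, proxy, another machine) and drop them into MNIST_data/, since read_data_sets() skips the download when the files already exist. The mirror URL below is an assumption; any reachable copy of the files works.

```python
# Pre-download the MNIST archives so read_data_sets('MNIST_data/') finds them locally.
import os
from six.moves import urllib

MIRROR = "https://storage.googleapis.com/cvdf-datasets/mnist/"  # assumed mirror
FILES = ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
         "t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]

if not os.path.isdir("MNIST_data"):
    os.makedirs("MNIST_data")
for name in FILES:
    target = os.path.join("MNIST_data", name)
    if not os.path.exists(target):
        urllib.request.urlretrieve(MIRROR + name, target)
```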

Can the logits inside a TensorFlow loss be a placeholder?

I'm implementing handwritten digit recognition in TensorFlow. I want the logits of softmax_cross_entropy_with_logits to be a placeholder first, and then feed the computed value into that placeholder at run time, but I get ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients. I know it works if I just put outputs where the logits go, but if I insist on the logits being a placeholder first, how can I solve this?

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/as/下载/resnet-152_mnist-master/mnist_dataset", one_hot=True)
from tensorflow.contrib.layers import fully_connected

x = tf.placeholder(dtype=tf.float32, shape=[None, 784])
y = tf.placeholder(dtype=tf.float32, shape=[None, 1])
hidden1 = fully_connected(x, 100, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
hidden2 = fully_connected(hidden1, 200, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
hidden3 = fully_connected(hidden2, 200, activation_fn=tf.nn.elu,
                          weights_initializer=tf.random_normal_initializer())
outputs = fully_connected(hidden3, 10, activation_fn=None,
                          weights_initializer=tf.random_normal_initializer())

a = tf.placeholder(tf.float32, [None, 10])
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=a)
reduce_mean_loss = tf.reduce_mean(loss)
equal_result = tf.equal(tf.argmax(outputs, 1), tf.argmax(y, 1))
cast_result = tf.cast(equal_result, dtype=tf.float32)
accuracy = tf.reduce_mean(cast_result)
train_op = tf.train.AdamOptimizer(0.001).minimize(reduce_mean_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(30000):
        xs, ys = mnist.train.next_batch(128)
        result = outputs.eval(feed_dict={x: xs})
        sess.run(train_op, feed_dict={a: result, y: ys})
        print(i)
```
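
For context, a hedged explanation of the error: minimize() differentiates the loss with respect to the trainable variables by walking the graph, and a placeholder has no inputs, so a loss built on `a` has no path back to the weights, hence "No gradients provided for any variable". The connected version (which the question already mentions) looks like the sketch below; note also that `y` is declared with shape [None, 1] while one-hot MNIST labels are [None, 10], so that shape likely needs fixing too.

```python
# Loss built directly on the tensor that depends on the variables, so gradients exist.
loss = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=outputs)
train_op = tf.train.AdamOptimizer(0.001).minimize(tf.reduce_mean(loss))
```

Keeping `a` as a placeholder would require computing and applying the gradients manually (for example via tf.gradients and its grad_ys argument across two session runs), which is possible but rarely worth the complexity.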

Spyder error under Anaconda TensorFlow: program stops without showing an exception

Spyder error under Anaconda TensorFlow. Help: why does this error keep appearing even though the program is fine? Linear regression runs without problems. In debug mode the error occurs at the part circled in the first screenshot, and then "Kernel died, restarting" appears. (This program used to run fine; after reinstalling the OS and reinstalling everything, only the MNIST program has this problem, everything else runs.) I think it's the same issue as above: in TensorFlow, `from tensorflow.examples.tutorials.mnist import input_data` errors out. ![图片说明](https://img-ask.csdn.net/upload/201901/15/1547523875_882023.png)![图片说明](https://img-ask.csdn.net/upload/201901/15/1547523887_569028.png)![图片说明](https://img-ask.csdn.net/upload/201901/15/1547523915_431063.png)

java org.bouncycastle.crypto.examples.DESExample

# Java error when running: Usage: java org.bouncycastle.crypto.examples.DESExample infile outfile [keyfile]

Environment: Eclipse + JDK 8. Imported jar: commons-codec-1.8.jar. Code:

```java
package com.test.shautil;

import java.security.MessageDigest;
import org.apache.commons.codec.binary.Hex;
import org.apache.commons.codec.digest.DigestUtils;

public class shatest {
    /**
     * SHA1 digest via commons-codec
     * @param message
     */
    public static void SHA1(String message) {
        System.out.println("SHA1加密后为" + DigestUtils.sha1Hex(message));
    }

    /**
     * SHA256 digest via commons-codec
     * @param message
     */
    public static void SHA256(String message) {
        MessageDigest ccSHA256 = DigestUtils.getSha256Digest();
        byte[] byteFinal = ccSHA256.digest(message.getBytes());
        System.out.println("SHA256加密后为" + Hex.encodeHexString(byteFinal));
    }

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        String message = "jevirs123";
        SHA1(message);
        SHA256(message);
        SHA384(message);
        SHA512(message);
    }
}
```

Question about using the with-as statement in TensorFlow

"A session may own resources, such as variables, queues, and readers. It is important to release these resources when they are no longer required." So the with-as statement is there to release resources once they are no longer needed, but when I used it in the MNIST getting-started code I got a pile of errors. Why?

```python
import tensorflow.examples.tutorials.mnist.input_data as input_data
import tensorflow as tf

mnist = input_data.read_data_sets('MNIST_data/', one_hot = True)
#mnist.train
#mnist.test
#mnist.train.images [60000,784]
#mnist.train.labels [60000,10]
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, w) + b)
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict = {x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
output = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
print(output)
sess.close()
```

I replaced the italicized part (the last few lines) with `with tf.Session() as sess: ...` and got errors. Is it because I had already used sess = tf.Session() earlier? Also, since sess.run() is called many times in the code above, does that mean I can't freely use a with-as statement?
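
For reference, a sketch of the usual with-as pattern under TF 1.x: the graph is built first, and every sess.run call, however many there are, goes inside a single with block. Calling sess.run repeatedly is fine; what fails is touching the session after the block has exited (it is closed automatically at that point) or keeping a second, separately created session around.

```python
# Same training/evaluation as above, but using with-as instead of sess = tf.Session().
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print(acc)
# no sess.close() needed; sess cannot be used out here
```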

Using the Tensorflow Object Detection API: can I modify the pipeline.config file without training?

While using this API I downloaded the faster_rcnn_inception_v2_coco_2018_01_28 model from GitHub. I'm now testing my own images with it, but I'd like to adjust its pipeline.config, for example changing momentum_optimizer to adam, or tuning parameters like the IoU threshold. I don't want to train a model again. Is there a way to adjust the model's config file without training? ![图片说明](https://img-ask.csdn.net/upload/202002/29/1582972058_335720.png)
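
Two hedged observations. Optimizer settings such as momentum_optimizer vs adam only matter during training, so changing them without retraining has no effect at inference time. Thresholds that are applied at detection time (score and NMS IoU), however, live in the model's post_processing block of pipeline.config and can be changed without any training; the field names below follow the released sample configs and should be treated as an assumption for this particular model:

```
post_processing {
  batch_non_max_suppression {
    score_threshold: 0.3          # minimum score to keep a detection
    iou_threshold: 0.6            # NMS IoU threshold
    max_detections_per_class: 100
    max_total_detections: 300
  }
  score_converter: SOFTMAX
}
```

After editing, re-exporting the inference graph from the downloaded checkpoint (the repo's export_inference_graph.py script) makes the new values take effect; no training run is involved.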

TensorFlow multi-GPU parallel training: the model converges slowly

在使用多GPU并行训练深度学习神经网络时,以TFrecords 形式读取MNIST数据的训练数据进行训练,发现比直接用MNIST训练数据训练相同模型时,发现前者收敛速度慢和运行时间长,已知模型没有问题,想请大神帮忙看看是什么原因导致运行速度慢,运行时间长 ``` import os import time import numpy as np import tensorflow as tf from datetime import datetime import tensorflow.compat.v1 as v1 from tensorflow.examples.tutorials.mnist import input_data BATCH_SIZE = 100 LEARNING_RATE = 1e-4 LEARNING_RATE_DECAY = 0.99 REGULARZTION_RATE = 1e-4 EPOCHS = 10000 MOVING_AVERAGE_DECAY = 0.99 N_GPU = 2 MODEL_SAVE_PATH = r'F:\model\log_dir' MODEL_NAME = 'model.ckpt' TRAIN_PATH = r'F:\model\threads_file\MNIST_data_tfrecords\train.tfrecords' TEST_PATH = r'F:\model\threads_file\MNIST_data_tfrecords\test.tfrecords' def __int64_feature(value): return v1.train.Feature(int64_list=v1.train.Int64List(value=[value])) def __bytes_feature(value): return v1.train.Feature(bytes_list=v1.train.BytesList(value=[value])) def creat_tfrecords(path, data, labels): writer = tf.io.TFRecordWriter(path) for i in range(len(data)): image = data[i].tostring() label = labels[i] examples = v1.train.Example(features=v1.train.Features(feature={ 'image': __bytes_feature(image), 'label': __int64_feature(label) })) writer.write(examples.SerializeToString()) writer.close() def parser(record): features = v1.parse_single_example(record, features={ 'image': v1.FixedLenFeature([], tf.string), 'label': v1.FixedLenFeature([], tf.int64) }) image = tf.decode_raw(features['image'], tf.uint8) image = tf.reshape(image, [28, 28, 1]) image = tf.cast(image, tf.float32) label = tf.cast(features['label'], tf.int32) label = tf.one_hot(label, 10, on_value=1, off_value=0) return image, label def get_input(batch_size, path): dataset = tf.data.TFRecordDataset([path]) dataset = dataset.map(parser) dataset = dataset.shuffle(10000) dataset = dataset.repeat(100) dataset = dataset.batch(batch_size) iterator = dataset.make_one_shot_iterator() image, label = iterator.get_next() return image, label def model_inference(images, labels, rate, regularzer=None, reuse_variables=None): with v1.variable_scope(v1.get_variable_scope(), reuse=reuse_variables): with tf.compat.v1.variable_scope('First_conv'): w1 = tf.compat.v1.get_variable('weights', [3, 3, 1, 32], tf.float32, initializer=tf.compat.v1.truncated_normal_initializer(stddev=0.1)) if regularzer: tf.add_to_collection('losses', regularzer(w1)) b1 = tf.compat.v1.get_variable('biases', [32], tf.float32, initializer=tf.compat.v1.constant_initializer(0.1)) activation1 = tf.nn.relu(tf.nn.conv2d(images, w1, strides=[1, 1, 1, 1], padding='SAME') + b1) out1 = tf.nn.max_pool2d(activation1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') with tf.compat.v1.variable_scope('Second_conv'): w2 = tf.compat.v1.get_variable('weight', [3, 3, 32, 64], tf.float32, initializer=tf.compat.v1.truncated_normal_initializer(stddev=0.1)) if regularzer: tf.add_to_collection('losses', regularzer(w2)) b2 = tf.compat.v1.get_variable('biases', [64], tf.float32, initializer=tf.compat.v1.constant_initializer(0.1)) activation2 = tf.nn.relu(tf.nn.conv2d(out1, w2, strides=[1, 1, 1, 1], padding='SAME') + b2) out2 = tf.nn.max_pool2d(activation2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') out3 = tf.reshape(out2, [-1, 7*7*64], name='flatten') with tf.compat.v1.variable_scope('FC_1'): w3 = tf.compat.v1.get_variable('weight', [7*7*64, 1024], tf.float32, initializer=tf.compat.v1.truncated_normal_initializer(stddev=0.1)) if regularzer: tf.add_to_collection('losses', regularzer(w3)) b3 = tf.compat.v1.get_variable('biases', [1024], tf.float32, 
initializer=tf.compat.v1.constant_initializer(0.1)) activation3 = tf.nn.relu(tf.matmul(out3, w3) + b3) out4 = tf.nn.dropout(activation3, keep_prob=rate) with tf.compat.v1.variable_scope('FC_2'): w4 = tf.compat.v1.get_variable('weight', [1024, 10], tf.float32, initializer=tf.compat.v1.truncated_normal_initializer(stddev=0.1)) if regularzer: tf.add_to_collection('losses', regularzer(w4)) b4 = tf.compat.v1.get_variable('biases', [10], tf.float32, initializer=tf.compat.v1.constant_initializer(0.1)) output = tf.nn.softmax(tf.matmul(out4, w4) + b4) with tf.compat.v1.variable_scope('Loss_entropy'): if regularzer: loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels, 1), logits=output)) \ + tf.add_n(tf.get_collection('losses')) else: loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tf.argmax(labels, 1), logits=output)) with tf.compat.v1.variable_scope('Accuracy'): correct_data = tf.equal(tf.math.argmax(labels, 1), tf.math.argmax(output, 1)) accuracy = tf.reduce_mean(tf.cast(correct_data, tf.float32, name='accuracy')) return output, loss, accuracy def average_gradients(tower_grads): average_grads = [] for grad_and_vars in zip(*tower_grads): grads = [] for g, v2 in grad_and_vars: expanded_g = tf.expand_dims(g, 0) grads.append(expanded_g) grad = tf.concat(grads, 0) grad = tf.reduce_mean(grad, 0) v = grad_and_vars[0][1] grad_and_var = (grad, v) average_grads.append(grad_and_var) return average_grads def main(argv=None): with tf.Graph().as_default(), tf.device('/cpu:0'): x, y = get_input(batch_size=BATCH_SIZE, path=TRAIN_PATH) regularizer = tf.contrib.layers.l2_regularizer(REGULARZTION_RATE) global_step = v1.get_variable('global_step', [], initializer=v1.constant_initializer(0), trainable=False) lr = v1.train.exponential_decay(LEARNING_RATE, global_step, 55000/BATCH_SIZE, LEARNING_RATE_DECAY) opt = v1.train.AdamOptimizer(lr) tower_grads = [] reuse_variables = False device = ['/gpu:0', '/cpu:0'] for i in range(len(device)): with tf.device(device[i]): with v1.name_scope(device[i][1:4] + '_0') as scope: out, cur_loss, acc = model_inference(x, y, 0.3, regularizer, reuse_variables) reuse_variables = True grads = opt.compute_gradients(cur_loss) tower_grads.append(grads) grads = average_gradients(tower_grads) for grad, var in grads: if grad is not None: v1.summary.histogram('gradients_on_average/%s' % var.op.name, grad) apply_gradient_op = opt.apply_gradients(grads, global_step) for var in v1.trainable_variables(): tf.summary.histogram(var.op.name, var) variable_averages = v1.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step) variable_to_average = (v1.trainable_variables() + v1.moving_average_variables()) variable_averages_op = variable_averages.apply(variable_to_average) train_op = tf.group(apply_gradient_op, variable_averages_op) saver = v1.train.Saver(max_to_keep=1) summary_op = v1.summary.merge_all() # merge_all 可以将所有summary全部保存到磁盘 init = v1.global_variables_initializer() with v1.Session(config=v1.ConfigProto(allow_soft_placement=True, log_device_placement=True)) as sess: init.run() summary_writer = v1.summary.FileWriter(MODEL_SAVE_PATH, sess.graph) # 指定一个文件用来保存图 for step in range(EPOCHS): try: start_time = time.time() _, loss_value, out_value, acc_value = sess.run([train_op, cur_loss, out, acc]) duration = time.time() - start_time if step != 0 and step % 100 == 0: num_examples_per_step = BATCH_SIZE * N_GPU examples_per_sec = num_examples_per_step / duration sec_per_batch = duration / N_GPU format_str = '%s: step %d, loss = 
%.2f(%.1f examples/sec; %.3f sec/batch), accuracy = %.2f' print(format_str % (datetime.now(), step, loss_value, examples_per_sec, sec_per_batch, acc_value)) summary = sess.run(summary_op) summary_writer.add_summary(summary, step) if step % 100 == 0 or (step + 1) == EPOCHS: checkpoint_path = os.path.join(MODEL_SAVE_PATH, MODEL_NAME) saver.save(sess, checkpoint_path, global_step=step) except tf.errors.OutOfRangeError: break if __name__ == '__main__': tf.app.run() ```

(100 C-coins bounty via Alipay) bazel build of the TensorFlow Android demo fails: *.h header files not found?

![图片说明](https://img-ask.csdn.net/upload/201711/23/1511421590_23793.png) The error is shown in the screenshot. The build command is "bazel build //tensorflow/examples/android:tensorflow_demo". Sometimes it also reports fatal error: 'cuda/include/cuda.h' file not found and fatal error: 'cuda_runtime.h' file not found. Please advise. Separately, with the command "sudo bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package" I can build TensorFlow successfully, generate the *.whl source-built package, and install it without problems. Versions: the latest tensorflow-1.4.0, ubuntu14.04, cuda8.0, cudnn5.1, ndk-r14, bazel-0.5.4, jdk8, nvidia-375.

TensorFlow RNN LSTM code does not run correctly?

The error is ValueError: None values not supported, raised at the cross_entropy line. Thanks, everyone.

```python
#7.2 RNN
import tensorflow as tf
#tf.reset_default_graph()
from tensorflow.examples.tutorials.mnist import input_data

# load the dataset
mnist = input_data.read_data_sets("MNIST_data/", one_hot = True)

# input images are 28*28
n_inputs = 28      # one row of 28 values is fed per step
max_time = 28      # 28 rows in total
lstm_size = 100    # hidden units
n_classes = 10     # 10 classes
batch_size = 50    # samples per batch
n_batch = mnist.train.num_examples // batch_size  # number of batches

# None here would mean the first dimension can be any length
x = tf.placeholder(tf.float32, [batch_size, 784])
# correct labels
y = tf.placeholder(tf.float32, [batch_size, 10])

# initialize weights
weights = tf.Variable(tf.truncated_normal([lstm_size, n_classes], stddev = 0.1))
# initialize biases
biases = tf.Variable(tf.constant(0.1, shape = [n_classes]))

# define the RNN
def RNN(X, weights, biases):
    # input = [batch_size, max_size, n_inputs]
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    # basic LSTM cell
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype = tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)

# compute the RNN output
prediction = RNN(x, weights, biases)
# loss
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels = y, logits = prediction))
# optimize with AdamOptimizer
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# results go into a boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# accuracy
accuracy = tf.reduce_mean(tf.cast(correct_precdition, tf.float32))
# initialize
init = tf.global_variable_initializer()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(6):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print('Iter' + str(epoch) + ',Testing Accuracy = ' + str(acc))
```
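
A hedged reading of the error: "None values not supported" at cross_entropy means prediction is None, and indeed the RNN function above computes results but never returns it. Two further typos (correct_precdition and global_variable_initializer) would surface right after that. A sketch of the fixes:

```python
def RNN(X, weights, biases):
    inputs = tf.reshape(X, [-1, max_time, n_inputs])
    lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, inputs, dtype=tf.float32)
    return tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)  # the return was missing

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # was: correct_precdition
init = tf.global_variables_initializer()                            # was: global_variable_initializer
```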

Error when running an example from TensorFlow's MNIST tutorials

```
/tensorflow-master/tensorflow/examples/tutorials/mnist$ python fully_connected_feed.py
/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "fully_connected_feed.py", line 277, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
```

I downloaded the package from GitHub and did not change the code; running fully_connected_feed.py gives this error.
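
A hedged interpretation: tf.app.run(main=..., argv=...) is the signature expected by recent versions of the tutorial scripts, so this TypeError usually means the scripts were taken from a newer tensorflow checkout than the TensorFlow release that is installed (here an old Python 2.7 install). The clean fix is to upgrade TensorFlow or check out the repository tag matching the installed version; if neither is possible, a shim like the following (an assumption about the older signature) may get past the call site:

```python
import sys
try:
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
except TypeError:
    tf.app.run(main=main)  # older tf.app.run() without the argv parameter (assumption)
```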

Problem encountered using model_main.py in TensorFlow!

直接上代码了 ``` D:\tensorflow\models\research\object_detection>python model_main.py --pipeline_config_path=E:\python_demo\pedestrian_demo\pedestrian_train\models\pipeline.config --model_dir=E:\python_demo\pedestrian_demo\pedestrian_train\models\train --num_train_steps=5000 --sample_1_of_n_eval_examples=1 --alsologstderr Traceback (most recent call last): File "model_main.py", line 109, in <module> tf.app.run() File "C:\anaconda\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run _sys.exit(main(argv)) File "model_main.py", line 71, in main FLAGS.sample_1_of_n_eval_on_train_examples)) File "D:\ssd-detection\models-master\research\object_detection\model_lib.py", line 589, in create_estimator_and_inputs pipeline_config_path, config_override=config_override) File "D:\ssd-detection\models-master\research\object_detection\utils\config_util.py", line 98, in get_configs_from_pipeline_file text_format.Merge(proto_str, pipeline_config) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 574, in Merge descriptor_pool=descriptor_pool) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 631, in MergeLines return parser.MergeLines(lines, message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 654, in MergeLines self._ParseOrMerge(lines, message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 676, in _ParseOrMerge self._MergeField(tokenizer, message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField merger(tokenizer, message, field) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField self._MergeField(tokenizer, sub_message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField merger(tokenizer, message, field) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField self._MergeField(tokenizer, sub_message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 801, in _MergeField merger(tokenizer, message, field) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 875, in _MergeMessageField self._MergeField(tokenizer, sub_message) File "C:\anaconda\lib\site-packages\google\protobuf\text_format.py", line 768, in _MergeField (message_descriptor.full_name, name)) google.protobuf.text_format.ParseError: 35:7 : Message type "object_detection.protos.SsdFeatureExtractor" has no field named "batch_norm_trainable". ``` 这个错误怎么解决,求大神指导~
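
A hedged reading of the final line of the traceback: the installed object_detection protos no longer define a batch_norm_trainable field on SsdFeatureExtractor (it was dropped in newer releases), while the pipeline.config, at the position the parser reports (35:7), still contains it. Deleting that line from the config usually lets parsing continue; the block below is an illustration based on older sample configs, not this exact file:

```
feature_extractor {
  # ... other feature_extractor settings ...
  # batch_norm_trainable: true   <- delete this line; the field was removed from the protos
}
```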

TensorFlow aborts immediately on import (error at cpu_feature_guard.cc:37)

I compiled with AVX successfully, but it cannot be used on this machine. Why is that?

```
In [1]: import tensorflow
2018-11-16 14:05:15.760738: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine.
已放弃 (核心已转储)
```

I also tried several other versions afterwards, but the error is the same.
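
The fatal message from cpu_feature_guard.cc:37 means the installed TensorFlow binary was built with AVX instructions while the CPU executing it does not support AVX, so the process aborts the moment the library loads; compiling "successfully" on one machine does not help another machine that lacks the instruction set. A quick check on Linux, as a sketch:

```python
# Does this CPU advertise AVX at all? (Linux only; reads the kernel's CPU flags list.)
with open("/proc/cpuinfo") as f:
    print("avx" in f.read().split())
```

If it prints False, the options are a TensorFlow build compiled without AVX for that machine (building from source on it, or an older or community wheel that does not require AVX), rather than trying different versions of the same AVX-enabled wheel.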

Error in the MNIST code from Zheng Zeyu's TensorFlow book

```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/path/to/MNIST_data/', one_hot=True)
print('Training data size:', mnist.train.num_examples)
print('Validating data size:', mnist.validation.num_examples)
print('Testing data size:', mnist.test.num_example)
print('Example training data:', mnist.train.images[0])
print('Example training data label:', mnist.train.labels[0])
```

The displayed error is: ERROR! Session/line number was not unique in database. History logging moved to new session 52
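
The "Session/line number was not unique in database" message is an IPython/Spyder history-logging notice, not the actual failure. The visible bug in the snippet is an attribute typo on the test set, which raises an AttributeError:

```python
print('Testing data size:', mnist.test.num_examples)  # was: mnist.test.num_example
```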

Question about a TensorFlow code error

import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data',one_hot=True) def compute_accuracy(v_xs,v_ys): global prediction y_pre = sess.run(prediction,feed_dict={xs:v_xs}) correct_prediction = tf.equal(tf.argmax(y_pre,1),tf.argmax(v_ys,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32)) result = sess.run(accuracy,feed_dict={xs:v_xs,ys:v_ys}) return result def weight_variable(shape): initial = tf.truncated_normal(shape,stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1,shape = shape) return tf.Variable(initial) def conv2d(x,W): #strides=[1,x_movement,y_movement,1] #Must have strides[0]=strides[3]=1 return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME') #define placeholder for inputs to network xs = tf.placeholder(tf.float32,[None,784]) #28*28 ys = tf.placeholder(tf.float32,[None,10]) keep_prob = tf.placeholder(tf.float32) x_image = tf.reshape(xs,[-1,28,28,1]) #print(x_image.shape) #[n_samples,28,28,1] ##conv1 layer## W_conv1 = weight_variable([5,5,1,32])#patch 5*5,in size 1,out size 32 b_conv1 = bias_variable([32]) h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1)+b_conv1)#output size 28*28*32 h_pool1 = max_pool_2x2(h_conv1)#output size 14*14*32 ##conv2 layer## W_conv2 = weight_variable([5,5,32,64])#patch 5*5,in size 32,out size 64 b_conv2 = bias_variable([64]) h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)#output size 14*14*64 h_pool2 = max_pool_2x2(h_conv2)#output size 7*7*64 ##func1 layer## W_fc1 = weight_variable([7*7*64,1024]) b_fc1 = bias_variable([1024]) h_pool2_flat = tf.reshape(h_pool2,[-1,7*7*64])#[n_samples,7,7,64] ->> [n_samples,7*7*64] h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1) h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob) ##func2 layer## W_fc2 = weight_variable([1024,10]) b_fc2 = bias_variable([10]) prediction = tf.nn.softmax(tf.matmul(h_fc1_drop,W_fc2)+b_fc2) #the error between prediction and the real data cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys*tf.log(prediction),reduction_indices=[1])) #loss train_step = tf.train.AdamOptimizer(1e-4).minimize((cross_entropy)) sess = tf.Session() sess.run(tf.global_variables_initializer()) for i in range(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step,feed_dict={xs:batch_xs,ys:batch_ys}) if i % 50 == 0: print(compute_accuracy(mnist.test.images,mnist.test.labels)) 上述是代码 也是我学来的 但是却在报错Traceback (most recent call last): File "D:/aibuild/CNN.py", line 68, in <module> sess.run(train_step,feed_dict={xs:batch_xs,ys:batch_ys}) File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 900, in run run_metadata_ptr) File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1135, in _run feed_dict_tensor, options, run_metadata) File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1316, in _do_run run_metadata) File "D:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1335, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_2' with dtype float [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] Caused by op 'Placeholder_2', defined at: File "D:/aibuild/CNN.py", line 32, in <module> keep_prob = 
tf.placeholder(tf.float32) File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1808, in placeholder return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name) File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5835, in placeholder "Placeholder", dtype=dtype, shape=shape, name=name) File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper op_def=op_def) File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3392, in create_op op_def=op_def) File "D:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1718, in __init__ self._traceback = self._graph._extract_stack() # pylint: disable=protected-access InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_2' with dtype float [[Node: Placeholder_2 = Placeholder[dtype=DT_FLOAT, shape=<unknown>, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] Process finished with exit code 1 求大佬解决
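
The traceback names 'Placeholder_2', which is keep_prob: it is created and consumed by the dropout layer but never fed. A hedged fix: supply it in every run call, with dropout on during training and off during evaluation (including both runs inside compute_accuracy):

```python
# keep_prob ('Placeholder_2') must be fed everywhere the graph is run.
sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})   # train with dropout

y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1.0})             # no dropout at test time
result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1.0})
```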

TensorFlow training problem under Ubuntu 16.04

```
tensorflow.python.framework.errors_impl.NotFoundError: /opt/tensorflow/bazel-bin/tensorflow/examples/image_retraining/retrain.runfiles/org_tensorflow/tensorflow/contrib/data/python/ops/../../_prefetching_ops.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringB5cxx11E
```

I have no ideas about this problem; any suggestions would be appreciated.
