  • 0 answers · 3 views

Help — does anyone know what replaces tf.contrib.layers.apply_regularization in TensorFlow 2.0? I want to use it like this:

    self.l2_loss = tf.contrib.layers.apply_regularization(
        # regularizer=tf.contrib.layers.l2_regularizer(self.l2_alpha),
        regularizer=tf.keras.regularizers.l2(self.l2_alpha),
        weights_list=tf.trainable_variables())
    self.loss = tf.reduce_mean(losses) + self.l2_loss

tf.contrib seems to have been dropped in 2.0, and going back to 1.0 looks like a lot of trouble. Is there a replacement for this method in 2.0?
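One common TF2 substitute (my suggestion, not from this thread) is to sum `tf.nn.l2_loss` over the trainable variables, e.g. `self.l2_loss = self.l2_alpha * tf.add_n([tf.nn.l2_loss(v) for v in model.trainable_variables])`. The math behind that one-liner — alpha times half the sum of squared weights — can be sketched without TensorFlow:

```python
import numpy as np

def l2_penalty(weight_arrays, alpha):
    """NumPy mirror of alpha * sum(tf.nn.l2_loss(w)):
    tf.nn.l2_loss(w) is defined as sum(w ** 2) / 2."""
    return alpha * sum(np.sum(w ** 2) / 2.0 for w in weight_arrays)

# Two toy "trainable variables" standing in for a model's weights
weights = [np.array([1.0, 2.0]), np.array([[2.0]])]
print(l2_penalty(weights, alpha=0.1))  # 0.1 * ((1 + 4 + 4) / 2) = 0.45
```

This matches what `apply_regularization` computed with an L2 regularizer, so the total loss stays `tf.reduce_mean(losses) + self.l2_loss`.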

  • 2 answers · 8 views

In my case I trained the model on Windows and then load it with the load_model method from tensorflow.keras.models — is it solved the same way?

  • 0 answers · 5 views

I've seen plenty of tutorials on making PyInstaller bundle only the libraries a program actually needs, to keep the output small. But when the libraries a program uses pull in large packages such as pandas, numpy, or torch, what should be done to reduce the file size?

References (Chinese articles): "Python 打包成 exe,太大了该怎么解决?" (How to fix an oversized exe built from Python?); "pyinstaller将py转exe后文件体积很大达到100M+,有什么办法可以减小exe体积?" (The exe built by PyInstaller exceeds 100 MB — how can it be shrunk?); "怎么解决anaconda中pyinstaller打包文件过大问题?" (How to fix oversized PyInstaller bundles under Anaconda?)

  • 0 answers · 4 views

import os
import numpy as np
import tensorflow as tf
import input_data
import model

# Variable declarations
N_CLASSES = 4           # four flower classes
IMG_W = 64              # resize images; larger sizes take longer to train
IMG_H = 64
BATCH_SIZE = 20
CAPACITY = 200
MAX_STEP = 2000         # usually more than 10K
learning_rate = 0.0001  # usually less than 0.0001

# Input paths
train_dir = 'D:/桌面/CDA/7、机器学习/input_data'  # training samples
logs_train_dir = 'D:/桌面/CDA/7、机器学习/save'   # logs / checkpoints

# train, train_label = input_data.get_files(train_dir)
train, train_label, val, val_label = input_data.get_files(train_dir, 0.3)

# Training data and labels
train_batch, train_label_batch = input_data.get_batch(train, train_label, IMG_W, IMG_H, BATCH_SIZE, CAPACITY)
# Validation data and labels
val_batch, val_label_batch = input_data.get_batch(val, val_label, IMG_W, IMG_H, BATCH_SIZE, CAPACITY)

# Training ops
train_logits = model.inference(train_batch, BATCH_SIZE, N_CLASSES)
train_loss = model.losses(train_logits, train_label_batch)
train_op = model.trainning(train_loss, learning_rate)
train_acc = model.evaluation(train_logits, train_label_batch)

# Validation ops
test_logits = model.inference(val_batch, BATCH_SIZE, N_CLASSES)
test_loss = model.losses(test_logits, val_label_batch)
test_acc = model.evaluation(test_logits, val_label_batch)

# Summary op for logging
summary_op = tf.summary.merge_all()

sess = tf.Session()
# Writer for the log files
train_writer = tf.summary.FileWriter(logs_train_dir, sess.graph)
# val_writer = tf.summary.FileWriter(logs_test_dir, sess.graph)
# Saver for the trained model
saver = tf.train.Saver()

# Initialize all nodes
sess.run(tf.global_variables_initializer())

# Queue coordination
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

# Batch training
try:
    # Run MAX_STEP training steps, one batch per step
    for step in np.arange(MAX_STEP):
        if coord.should_stop():
            break
        _, tra_loss, tra_acc = sess.run([train_op, train_loss, train_acc])
        # Every 10 steps, print the current loss/acc and write a summary
        if step % 10 == 0:
            print('Step %d, train loss = %.2f, train accuracy = %.2f%%' % (step, tra_loss, tra_acc * 100.0))
            summary_str = sess.run(summary_op)
            train_writer.add_summary(summary_str, step)
        # Save the trained model at the final step
        if (step + 1) == MAX_STEP:
            checkpoint_path = os.path.join(logs_train_dir, 'model.ckpt')
            saver.save(sess, checkpoint_path, global_step=step)
except tf.errors.OutOfRangeError:
    print('Done training -- epoch limit reached')
finally:
    coord.request_stop()

  • 1 answer · 43 views

question: I've built a Python (3.5) environment on the development board, and simple TensorFlow (versions 1.8.0 & 1.12.1) code runs fine, but as soon as I train a model it reports "Illegal instruction" — the same code is fine on the PC! Restoring a model trained on the PC triggers the same error. I've checked many websites but found no similar problem. How can I solve this?

A minimal example:

import tensorflow as tf
import numpy as np

x = np.float32(np.random.rand(100, 1))
y = np.dot(x, 0.5) + 0.7

b = tf.Variable(np.float32(0.3))
a = tf.Variable(np.float32(0.3))
y_value = tf.multiply(x, a) + b

loss = tf.reduce_mean(tf.square(y_value - y))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(0, 100):
    print(step)
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(loss), sess.run(a), sess.run(b))

This runs perfectly on the PC, but on ARM it prints:

0
Illegal instruction

So it crashes at sess.run(train). I have verified that sess.run() itself works on the ARM board, so it is the gradient-descent step that cannot run — it hits an illegal instruction. ARM CPU details:

root@EmbedSky-Board:/xzy/mix# lscpu
Architecture:          armv7l
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
Model name:            ARMv7 Processor rev 10 (v7l)
CPU max MHz:           996.0000
CPU min MHz:           792.0000
root@EmbedSky-Board:/xzy/mix# cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 10 (v7l)
BogoMIPS        : 6.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpd32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 10

Additional info: the board is a TQIMX6Q (armv7, NXP i.MX6Q Cortex-A9 4x1GHz); the TensorFlow wheel came from https://www.piwheels.org/simple/. This problem is truly bizarre!

  • 3 answers · 7 views

My Keras version is 2.4.3. I cloned the code from https://github.com/titu1994/DenseNet, but the line `from keras.utils.conv_utils import normalize_data_format` fails with: cannot import name 'normalize_data_format' from 'keras.utils.conv_utils'. Online advice says to replace it with `from keras.backend.common import normalize_data_format`, but that raises: ModuleNotFoundError: No module named 'keras.backend.common'; 'keras.backend' is not a package. For certain reasons I don't dare change my Keras version. Is there a way to locate normalize_data_format in this version of Keras? I'm quite new to how the libraries are laid out — I looked at the __init__ file under Anaconda3\Lib\site-packages\tensorflow\keras\backend but couldn't figure it out.
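If the helper cannot be imported from the installed Keras, one workaround (my suggestion, not from the thread) is to define it locally in the DenseNet code — the function is only a small validator. A sketch, with the `keras.backend.image_data_format()` fallback replaced by a plain default parameter for illustration:

```python
def normalize_data_format(value, default="channels_last"):
    """Local stand-in for Keras' normalize_data_format.
    In real use, `default` would come from keras.backend.image_data_format()."""
    if value is None:
        return default
    data_format = str(value).lower()
    if data_format not in {"channels_first", "channels_last"}:
        raise ValueError(
            'The `data_format` argument must be one of "channels_first", '
            '"channels_last". Received: ' + str(value))
    return data_format

print(normalize_data_format("Channels_Last"))  # channels_last
```

Dropping a function like this next to the import site avoids touching the installed Keras version at all.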

  • 3 answers · 18 views

def per_class(imagefile):
    image = Image.open(imagefile)
    image = image.resize([227, 227])
    image_array = np.array(image)
    image = tf.cast(image_array, tf.float32)
    image = tf.image.resize_image_with_crop_or_pad(image, 227, 227)
    image = tf.image.per_image_standardization(image)  # standardize the image
    image = tf.reshape(image, [1, 227, 227, 3])
    saver = tf.train.Saver()
    with tf.Session() as sess:
        save_model = tf.train.latest_checkpoint(r'D:\C盘桌面搬家—桌面\BelgiumTSC_Testing\XUNLIAN')
        saver.restore(sess, save_model)
        image = sess.run(image)
        prediction = sess.run(fc3, feed_dict={x: image})
        # print('prediction', prediction)
        max_index = np.argmax(prediction)
        imagefile1 = imagefile.split('\\')[-1]
        print(imagefile1 + ' belongs to traffic-sign class ' + str(max_index))
        return max_index

# Run the above over a folder of images
imagefiles = r"D:\C盘桌面搬家—桌面\GTSRB_Final_Test_Images\b\1"
for root, sub_folders, files in os.walk(imagefiles):
    for name in files:
        imagefile = os.path.join(root, name)
        # print(imagefile)
        a = per_class(imagefile)

After detection, does an image-recognition algorithm still need to classify the detected data? And how is accuracy computed — could you share some code? Many thanks.
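On the accuracy question: once there is a predicted class per image and a known true class (e.g. parsed from the folder name), accuracy is simply the fraction of matches. A minimal sketch — the two label lists below are made up for illustration:

```python
import numpy as np

y_true = np.array([0, 1, 2, 2, 1, 0])  # ground-truth class ids per image
y_pred = np.array([0, 1, 1, 2, 1, 0])  # ids returned by per_class() per image

# Accuracy = number of correct predictions / total predictions
accuracy = float(np.mean(y_pred == y_true))
print(accuracy)  # 5 of 6 correct ≈ 0.833
```

In the loop above, you would collect `per_class(imagefile)` results into `y_pred` and the folder-derived labels into `y_true`, then compute this once at the end.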

  • 1 answer · 21 views

""" After moving all the files using the 1_ file, we run this one to extract the images from the videos and also create a data file we can use for training and testing later. """ import csv import glob import os import os.path import subprocess from subprocess import call def extract_files(): """After we have all of our videos split between train and test, and all nested within folders representing their classes, we need to make a data file that we can reference when training our RNN(s). This will let us keep track of image sequences and other parts of the training process. We'll first need to extract images from each of the videos. We'll need to record the following data in the file: [train|test], class, filename, nb frames Extracting can be done with ffmpeg: `ffmpeg -i video.mpg image-%04d.jpg` """ data_file = [] folders = ['E:/Graduation Project/UCF-101_video_classification-master/data/train/', 'E:/Graduation Project/UCF-101_video_classification-master/data/test/'] for folder in folders: class_folders = glob.glob(folder + '*') for vid_class in class_folders: class_files = glob.glob(vid_class + '/*.avi') for video_path in class_files: # Get the parts of the file. video_parts = get_video_parts(video_path) train_or_test, classname, filename_no_ext, filename = video_parts # Only extract if we haven't done it yet. Otherwise, just get # the info. if not check_already_extracted(video_parts): # Now extract it. src = train_or_test + '/' + classname + '/' + \ filename dest = train_or_test + '/' + classname + '/' + \ filename_no_ext + '-%04d.jpg' subprocess.call(["ffmpeg", "-i", src, dest]) # Now get how many frames it is. 
nb_frames = get_nb_frames_for_video(video_parts) data_file.append([train_or_test, classname, filename_no_ext, nb_frames]) print("Generated %d frames for %s" % (nb_frames, filename_no_ext)) 运行结果: Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� 一堆这个东西。根据代码最后一行,说明没有成功从视频中提取帧,所以能帮忙看看哪里的问题吗?
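The console message decodes to Windows saying it cannot find ffmpeg, so the subprocess.call fails before any frames are written. A quick way to check this (my suggestion, not from the thread) is to look the executable up on PATH before running the extraction:

```python
import shutil

# shutil.which returns the full path of the executable, or None if it
# cannot be found on PATH - exactly the lookup cmd.exe performs.
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path is None:
    print("ffmpeg is not on PATH - install it or add its bin/ folder to PATH")
else:
    print("ffmpeg found at", ffmpeg_path)
```

If the check fails, installing ffmpeg and adding its bin directory to the system PATH (then restarting the terminal/IDE) should make the extraction loop start producing frames.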

  • 2 answers · 39 views

I'm experimenting with musical voiceprints: I separate the vocals from songs, detect the vocal segments, split them into a dataset, and train a singer-identification model. I have plenty of songs to work with. The problem is that test accuracy won't go up. With 5,000 training samples, test accuracy sat around 0.2; increasing to 20,000 samples pushed it to 0.6; but now at 60,000 samples it is still stuck around 0.6. I'm using the network from this project: https://github.com/yeyupiaoling/VoiceprintRecognition-Tensorflow. Besides adding more samples (which is mostly what I've been doing, with no change), what should I try next to improve accuracy on the test set?

  • 2 answers · 40 views

How do I fix this problem? I couldn't find anything online. module 'subprocess' has no attribute 'src'
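That error suggests the code references `subprocess.src` somewhere, which does not exist — `subprocess` only exposes functions such as `call` and `run`; `src` is normally a local variable holding the input path. The correct call shape looks like this (using the Python interpreter itself as a stand-in command, since it is guaranteed to exist):

```python
import subprocess
import sys

# Pass the command and its arguments as a list of strings;
# subprocess.call runs it and returns the process exit code.
ret = subprocess.call([sys.executable, "-c", "print('ok')"])
print(ret)  # 0 on success
```

So the fix is usually to make sure `src` is defined as a variable before the call, rather than written as an attribute of the module.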

  • 4 answers · 29 views

Could an expert please take a look and help me figure out how to fix this?

  • 4 answers · 29 views

Following the method described in "视频分类之UCF-101上的CNN方法详解 - 知乎 (zhihu.com)" I downloaded the source code and the dataset. The first .py file moves the files. As the link instructs, I put the dataset under data. (Screenshots of the ucfTrainTestlist folder and the dataset folder were attached here.) In the source I changed the two file paths:

"""
After extracting the RAR, we run this to move all the files into
the appropriate train/test folders.
Should only run this file once!
"""
import os
import os.path


def get_train_test_lists(version='01'):
    """
    Using one of the train/test files (01, 02, or 03), get the filename
    breakdowns we'll later use to move everything.
    """
    # Get our files based on version.
    test_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/testlist' + version + '.txt'
    train_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/trainlist' + version + '.txt'

    # Build the test list.
    with open(test_file) as fin:
        test_list = [row.strip() for row in list(fin)]

    # Build the train list. Extra step to remove the class index.
    with open(train_file) as fin:
        train_list = [row.strip() for row in list(fin)]
        train_list = [row.split(' ')[0] for row in train_list]

    # Set the groups in a dictionary.
    file_groups = {
        'train': train_list,
        'test': test_list
    }

    return file_groups


def move_files(file_groups):
    """This assumes all of our files are currently in _this_ directory.
    So move them to the appropriate spot. Only needs to happen once.
    """
    # Do each of our groups.
    for group, videos in file_groups.items():

        # Do each of our videos.
        for video in videos:

            # Get the parts.
            parts = video.split('/')
            classname = parts[0]
            filename = parts[1]

            # Check if this class exists.
            if not os.path.exists(group + '/' + classname):
                print("Creating folder for %s/%s" % (group, classname))
                os.makedirs(group + '/' + classname)

            # Check if we have already moved this file, or at least that it
            # exists to move.
            if not os.path.exists(filename):
                print("Can't find %s to move. Skipping." % (filename))
                continue

            # Move it.
            dest = group + '/' + classname + '/' + filename
            print("Moving %s to %s" % (filename, dest))
            os.rename(filename, dest)

    print("Done.")


def main():
    """
    Go through each of our train/test text files and move the videos
    to the right place.
    """
    # Get the videos in groups so we can move them.
    group_lists = get_train_test_lists()

    # Move the files.
    move_files(group_lists)


if __name__ == '__main__':
    main()

But running it produces the error in the attached screenshot. Why is that?

  • 0 answers · 7 views

""" After extracting the RAR, we run this to move all the files into the appropriate train/test folders. Should only run this file once! """ import os import os.path def get_train_test_lists(version='01'): """ Using one of the train/test files (01, 02, or 03), get the filename breakdowns we'll later use to move everything. """ # Get our files based on version. test_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/testlist' + version + '.txt' train_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/trainlist' + version + '.txt' # Build the test list. with open(test_file) as fin: test_list = [row.strip() for row in list(fin)] # Build the train list. Extra step to remove the class index. with open(train_file) as fin: train_list = [row.strip() for row in list(fin)] train_list = [row.split(' ')[0] for row in train_list] # Set the groups in a dictionary. file_groups = { 'train': train_list, 'test': test_list } return file_groups def move_files(file_groups): """This assumes all of our files are currently in _this_ directory. So move them to the appropriate spot. Only needs to happen once. """ # Do each of our groups. for group, videos in file_groups.items(): # Do each of our videos. for video in videos: # Get the parts. parts = video.split('/') classname = parts[0] filename = parts[1] # Check if this class exists. if not os.path.exists(group + '/' + classname): print("Creating folder for %s/%s" % (group, classname)) os.makedirs(group + '/' + classname) # Check if we have already moved this file, or at least that it # exists to move. if not os.path.exists(filename): print("Can't find %s to move. Skipping." % (filename)) continue # Move it. dest = group + '/' + classname + '/' + filename print("Moving %s to %s" % (filename, dest)) os.rename(filename, dest) print("Done.") def main(): """ Go through each of our train/test text files and move the videos to the right place. 
""" # Get the videos in groups so we can move them. group_lists = get_train_test_lists() # Move the files. move_files(group_lists) if __name__ == '__main__': main() 这样一段代码,按照视频分类之UCF-101上的CNN方法详解 - 知乎 (zhihu.com),改了文件路径。 UCF-101文件夹下是所有视频数据文件夹,如下   按原代码的安排: 但是运行后显示: 请问这是为啥?

  • 3 answers · 39 views

I've recently been learning neural networks for work (I have 10 years' experience, mostly on server-side business logic), and we're now collaborating with other AI teams on related capabilities; I'm also personally interested. My questions: Is someone else's model useful as a reference for my own learning? I assume it is — I can look at their model's parameter settings, and perhaps even reuse those settings directly when training on my own data, right? How exactly would I go about that? There seem to be several visualization approaches, but I don't understand the concrete steps. Also, how do I assess whether the other party's model has room for optimization?

  • 2 answers · 34 views

A very simple script, only 32 lines:

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)  # add a channel dimension so the data matches the network

image_gen_train = ImageDataGenerator(
    rescale=1. / 1.,        # for raw images, a denominator of 255 scales into 0~1
    rotation_range=45,      # random rotation up to 45 degrees
    width_shift_range=.15,  # horizontal shift
    height_shift_range=.15, # vertical shift
    horizontal_flip=True,   # horizontal flip
    zoom_range=0.5          # random zoom up to 50%
)
image_gen_train.fit(x_train)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()

But it errors at run time (see the attached screenshot). Why? The environment configuration also seems fine — I'm using Anaconda's TensorFlow environment.

  • 0 answers · 11 views

  • 1 answer · 25 views

I'm using Anaconda + CUDA 10.1 + cuDNN + TensorFlow 2.1.0 + PyTorch 1.7.1 + PyCharm (Python 3.6). I'm a machine-learning beginner and wanted to try the cats-vs-dogs task as practice, but I've hit the problems below. My feeling is that some of my configuration is wrong: PyCharm keeps reporting that references cannot be found in '__init__.py', and running train.py raises the error shown in the screenshot below. I couldn't find a suitable fix online — any help would be hugely appreciated!!! The code is attached below (the dataset is the standard cats-vs-dogs dataset):

input_data.py

import tensorflow as tf
import numpy as np
import os


def get_files(file_dir):
    """Build lists of image paths and labels.
    Input:  directory holding the training photos
    Return: image list, label list
    """
    cats = []
    label_cats = []
    dogs = []
    label_dogs = []
    for file in os.listdir(file_dir):  # every image filename under file_dir
        name = file.split(sep='.')     # split the filename on '.'
        if name[0] == 'cat':           # only the prefix before the first '.' matters
            cats.append(file_dir + file)
            label_cats.append(0)       # 'cat' -> label 0
        else:
            dogs.append(file_dir + file)
            label_dogs.append(1)       # 'dog' -> label 1
    print('There are %d cats\nThere are %d dogs' % (len(cats), len(dogs)))

    # Shuffle the combined path/label lists together.
    image_list = np.hstack((cats, dogs))
    label_list = np.hstack((label_cats, label_dogs))
    temp = np.array([image_list, label_list])
    temp = temp.transpose()
    np.random.shuffle(temp)
    image_list = list(temp[:, 0])
    label_list = list(temp[:, 1])
    label_list = [int(float(i)) for i in label_list]  # labels back to int

    return image_list, label_list


def get_batch(image, label, image_W, image_H, batch_size, capacity):
    """Since the dataset is large, feed it through the network in batches.
    Inputs:  image, label   - the lists from get_files
             image_W, image_H - target width/height
             batch_size     - images per batch
             capacity       - maximum queue size
    Returns: image_batch - 4D tensor [batch_size, width, height, 3], tf.float32
             label_batch - 1D tensor [batch_size], tf.int32
    """
    image = tf.cast(image, tf.string)  # convert the lists to TF-readable types
    label = tf.cast(label, tf.int32)

    # The queue acts as a pipeline between the training set and the network:
    # each step takes one batch out while new images are pushed in.
    input_queue = tf.train.slice_input_producer([image, label])

    # Labels can be used directly; images must be read with tf.read_file().
    label = input_queue[1]
    image_contents = tf.read_file(input_queue[0])

    # Decode; stick to a single image format (all JPEG here).
    image = tf.image.decode_jpeg(image_contents, channels=3)

    # Preprocess: crop/pad and standardize to make the trained model more robust.
    image = tf.image.resize_image_with_crop_or_pad(image, image_W, image_H)
    image = tf.image.per_image_standardization(image)

    # Generate the batch.
    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=batch_size,
                                              num_threads=64,  # threads feeding the queue
                                              capacity=capacity)
    image_batch = tf.cast(image_batch, tf.float32)
    label_batch = tf.cast(label_batch, tf.int32)

    return image_batch, label_batch


# %% TEST - to test the generated batches of images.
# When training the model, DO comment out the following code.
import matplotlib.pyplot as plt

BATCH_SIZE = 2
CAPACITY = 256
IMG_W = 208
IMG_H = 208

train_dir = 'D:/Python/Pycharm_workstation/cats-vs-dogs-master/data/train'

image_list, label_list = get_files(train_dir)
image_batch, label_batch = get_batch(image_list, label_list, IMG_W, IMG_H, BATCH_SIZE, CAPACITY)

with tf.Session() as sess:
    i = 0
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        while not coord.should_stop() and i < 1:
            img, label = sess.run([image_batch, label_batch])  # just test one batch
            for j in np.arange(BATCH_SIZE):
                print('label: %d' % label[j])
                plt.imshow(img[j, :, :, :])
                plt.show()
            i += 1
    except tf.errors.OutOfRangeError:
        print('done!')
    finally:
        coord.request_stop()
    coord.join(threads)

model.py

import tensorflow as tf


def cnn_inference(images, batch_size, n_classes):
    """A simple CNN: (conv + pool) x2, two fully connected layers, softmax classifier.
    Inputs:  images - 4D tensor [batch_size, width, height, channels], tf.float32
    Returns: softmax_linear - [batch_size, n_classes] scores (softmax applied in losses())
    """
    # conv1: 16 3x3 kernels over 3 channels, SAME padding (output same size as input), ReLU
    with tf.variable_scope('conv1') as scope:
        # Shared variables: shape = [kernel size, kernel size, channels, kernel count]
        weights = tf.get_variable('weights',
                                  shape=[3, 3, 3, 16],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[16],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        # strides = step per dimension; first and last are 1 by convention
        conv = tf.nn.conv2d(images, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)

    # pool1: 2x2 max pooling with stride 2, followed by local response normalization,
    # which helps training
    with tf.variable_scope('pooling1_lrn') as scope:
        pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                               padding='SAME', name='pooling1')
        norm1 = tf.nn.lrn(pool1, depth_radius=4, bias=1.0, alpha=0.001 / 9.0,
                          beta=0.75, name='norm1')

    # conv2: the input-channel count (16) must match the previous tensor's depth
    with tf.variable_scope('conv2') as scope:
        weights = tf.get_variable('weights', shape=[3, 3, 16, 16], dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases', shape=[16], dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        conv = tf.nn.conv2d(norm1, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(pre_activation, name='conv2')

    # norm2 + pool2 (here we normalize first, then pool)
    with tf.variable_scope('pooling2_lrn') as scope:
        norm2 = tf.nn.lrn(conv2, depth_radius=4, bias=1.0, alpha=0.001 / 9.0,
                          beta=0.75, name='norm2')
        pool2 = tf.nn.max_pool(norm2, ksize=[1, 2, 2, 1], strides=[1, 1, 1, 1],
                               padding='SAME', name='pooling2')

    # local3: fully connected, 128 units; flatten the pooled tensor first
    with tf.variable_scope('local3') as scope:
        reshape = tf.reshape(pool2, shape=[batch_size, -1])  # one row per sample
        dim = reshape.get_shape()[1].value                   # resolve the -1 dimension
        weights = tf.get_variable('weights', shape=[dim, 128], dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases', shape=[128], dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)

    # local4: fully connected, another 128 units
    with tf.variable_scope('local4') as scope:
        weights = tf.get_variable('weights', shape=[128, 128], dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases', shape=[128], dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name='local4')

    # softmax_linear: linear scores for each of the n_classes (2 here).
    # Only named softmax_linear - the actual softmax is folded into the
    # cross-entropy inside losses(), which is faster.
    with tf.variable_scope('softmax_linear') as scope:
        weights = tf.get_variable('weights', shape=[128, n_classes], dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases', shape=[n_classes], dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        softmax_linear = tf.add(tf.matmul(local4, weights), biases, name='softmax_linear')

    return softmax_linear


def losses(logits, labels):
    """Cross-entropy between the computed scores and the true labels, averaged
    over the batch; logged via summary.scalar for TensorBoard.
    Inputs:  logits - output of cnn_inference; labels - the true labels
    Returns: loss - the cross-entropy loss
    """
    with tf.variable_scope('loss') as scope:
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits, labels=labels, name='loss_per_eg')
        loss = tf.reduce_mean(cross_entropy, name='loss')  # mean loss over all samples
        tf.summary.scalar(scope.name + '/loss', loss)
    return loss


def training(loss, learning_rate):
    """Minimize the loss with AdamOptimizer.
    Inputs:  loss, learning_rate
    Returns: train_op - the training op
    """
    with tf.name_scope('optimizer'):
        optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        # trainable=False keeps global_step out of GraphKeys.TRAINABLE_VARIABLES,
        # so training never tries to update it.
        global_step = tf.Variable(0, name='global_step', trainable=False)
        train_op = optimizer.minimize(loss, global_step=global_step)
    return train_op


def evaluation(logits, labels):
    """Average accuracy, computed batch by batch (every N steps) during training.
    Inputs:  logits - output of cnn_inference; labels - the true labels
    Returns: accuracy
    """
    with tf.variable_scope('accuracy') as scope:
        prediction = tf.nn.softmax(logits)  # normalize scores into (0, 1), interpretable as probabilities
        # in_top_k yields 1 where the top prediction matches the label, else 0.
        # (Passing logits directly would also work since argmax is unchanged;
        # prediction is used here for clarity.)
        correct = tf.nn.in_top_k(prediction, labels, 1)
        correct = tf.cast(correct, tf.float16)
        accuracy = tf.reduce_mean(correct)
        tf.summary.scalar(scope.name + '/accuracy', accuracy)  # track the trend
    return accuracy

train.py

import os
import numpy as np
import tensorflow as tf
import input_data
import model

N_CLASSES = 2           # cat and dog
IMG_W = 208             # resize images; larger sizes take longer to train
IMG_H = 208
BATCH_SIZE = 16         # images per iteration
CAPACITY = 2000
MAX_STEP = 10000        # usually 5K~10K
learning_rate = 0.0001  # usually less than 0.0001

train_dir = 'D:/Python/Pycharm_workstation/cats-vs-dogs-master/data/train'
logs_train_dir = 'D:/Python/Pycharm_workstation/catsvsdogs/log/'  # training logs and saved models

# Get the batches
train, train_label = input_data.get_files(train_dir)
train_batch, train_label_batch = input_data.get_batch(train, train_label,
                                                      IMG_W, IMG_H,
                                                      BATCH_SIZE, CAPACITY)

# Define the ops
train_logits = model.cnn_inference(train_batch, BATCH_SIZE, N_CLASSES)
train_loss = model.losses(train_logits, train_label_batch)
train_op = model.training(train_loss, learning_rate)
train__acc = model.evaluation(train_logits, train_label_batch)

summary_op = tf.summary.merge_all()

sess = tf.Session()
# Writer for the log files
train_writer = tf.summary.FileWriter(logs_train_dir, sess.graph)
# Saver for the trained model
saver = tf.train.Saver()

# Initialize all nodes
sess.run(tf.global_variables_initializer())

# tf.Coordinator and tf.QueueRunner manage the session's threads and are used
# together: the Coordinator can stop all worker threads at once and reports
# exceptions to the waiting program; start_queue_runners launches the enqueue
# threads that push tensors (training data) into the filename queue - without
# calling it, the graph just waits forever on an empty queue.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

# Batch training
try:
    # Run MAX_STEP training steps, one batch per step
    for step in np.arange(MAX_STEP):
        if coord.should_stop():
            break
        _, tra_loss, tra_acc = sess.run([train_op, train_loss, train__acc])
        # Every 50 steps, print the current loss/acc and write a summary
        if step % 50 == 0:
            print('Step %d, train loss = %.2f, train accuracy = %.2f%%' % (step, tra_loss, tra_acc * 100.0))
            summary_str = sess.run(summary_op)
            train_writer.add_summary(summary_str, step)
        # Every 2000 steps (and at the final step), save the trained model
        if step % 2000 == 0 or (step + 1) == MAX_STEP:
            checkpoint_path = os.path.join(logs_train_dir, 'model.ckpt')
            saver.save(sess, checkpoint_path, global_step=step)
except tf.errors.OutOfRangeError:  # raised when the file queue is exhausted
    print('Done training -- epoch limit reached')
finally:
    coord.request_stop()  # signal all threads to stop
    coord.join(threads)
sess.close()

  • 1 answer · 12 views

What is the difference between the model equation produced by machine-learning training and an equation fitted with math software (such as Excel or Origin)?
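For a simple linear model the two are literally the same computation: both minimize squared error. A sketch (assuming Excel's trendline, which also uses ordinary least squares, would return the same coefficients):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # an exact line, no noise

# Least-squares fit of degree 1 - the same math as a spreadsheet trendline
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # recovers 2.0 and 1.0
```

The differences only appear with more complex models: ML frameworks can fit high-dimensional, nonlinear functions (e.g. neural networks) by iterative optimization, while spreadsheet fitting is limited to small closed-form model families.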

  • 1 answer · 11 views

import tensorflow as tf
import matplotlib.pyplot as plt

fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()  # load the data

plt.imshow(train_images[0])
# print(train_images[0])
# print(train_labels[0])
plt.show()
print(train_images.shape)
print(test_images.shape)

train_images = train_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='Adam', loss='sparse_categorical_crossentropy')
model.fit(train_images, train_labels, epochs=5)
model.evaluate(test_images, test_labels)

My results are below (screenshot). I verified with shape that the training set has 60000 samples and the test set 10000, but the final training and evaluation counts shown are 1875 and 313 — 1/32 of the originals. Why????
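The numbers in the log are consistent with Keras counting batches (steps) rather than samples: fit and evaluate use a default batch_size of 32, and the progress bar shows steps per epoch. The arithmetic:

```python
import math

train_samples, test_samples, batch_size = 60000, 10000, 32

# Keras' progress bar counts batches, not individual samples
print(train_samples // batch_size)           # 1875 training steps per epoch
print(math.ceil(test_samples / batch_size))  # 313 evaluation steps (312.5 rounds up)
```

So all 60000 / 10000 samples are still being used — just 32 at a time.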

  • 0 answers · 4 views

A fruit-classification task restricted to apples, oranges, and bananas. I've already trained a VGG-like convolutional neural network in PyCharm and have the .h5 weight file and .pb model. Now I want to use the laptop webcam: with the camera running continuously, when it sees an apple it should draw a box around it and print 0; an orange, box it and print 1; a banana, box it and print 2. The CNN can already predict on single images, but how do I make it detect in real time? I have no idea where to start — any pointers would be appreciated, thanks.
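One common pattern (a sketch of my own, not from the thread) is a capture loop — e.g. OpenCV's `cv2.VideoCapture(0)` — that grabs a frame, resizes it to the model's input size, runs `model.predict`, and maps the arg-max of the output to a class id. The classifier-side mapping can be shown without a camera, using a made-up prediction vector standing in for `model.predict`:

```python
import numpy as np

# Class ids as described in the question (assumed mapping)
LABELS = {0: "apple", 1: "orange", 2: "banana"}

def classify(pred):
    """pred: model output of shape (1, 3); returns (class_id, name).
    In the real loop, pred = model.predict(preprocessed_frame_batch)."""
    class_id = int(np.argmax(pred, axis=-1)[0])
    return class_id, LABELS[class_id]

# A made-up softmax output standing in for a real prediction
pred = np.array([[0.1, 0.7, 0.2]])
print(classify(pred))  # (1, 'orange')
```

Note that drawing a box around the fruit needs more than a classifier: a pure classifier only labels the whole frame. Localization requires either a sliding-window/region approach over the frame or a proper detection model.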

  • 3 answers · 19 views

from flask import Flask, jsonify, request, redirect, render_template ImportError: No module named flask  

  • 0 answers · 11 views

import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2
import tensorflow as tf
import os

os.environ['CUDA_VISIBLE_DEVICES'] = '/gpu:0'

target_size = 96
base_model = MobileNetV2(weights='imagenet',
                         include_top=False,
                         input_shape=(target_size, target_size, 3))
model = keras.Sequential([
    base_model,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1024, activation='relu'),
    keras.layers.Dense(5)
])

train_path = 'C:/Users/11500/Desktop/ai人工智能导论/垃圾分类data/垃圾分类/training/'
test_path = 'C:/Users/11500/Desktop/ai人工智能导论/垃圾分类data/垃圾分类/test/'

train_data = ImageDataGenerator(
    rescale=1./225,  # normalize pixel values (note: 255 is probably intended)
)
test_data = ImageDataGenerator(
    rescale=1./225,  # normalize pixel values (note: 255 is probably intended)
)
train_generator = train_data.flow_from_directory(
    train_path,
    target_size=(target_size, target_size),
    batch_size=4,
    class_mode='categorical',
    seed=0)
test_generator = train_data.flow_from_directory(
    test_path,
    target_size=(target_size, target_size),
    batch_size=4,
    class_mode='categorical',
    seed=0)

labels = train_generator.class_indices
print(labels)
labels = dict((v, k) for k, v in labels.items())
print(labels)

def scheduler(epoch, lr):
    """Learning-rate schedule: try a different rate after the first epochs."""
    if epoch < 2:
        return lr
    else:
        return lr * 0.1

lr_callback = keras.callbacks.LearningRateScheduler(schedule=scheduler, verbose=1)

# Model checkpointing
root = '.eckpointsapter01'
folder = 'chapter01'
name = 'mobilenet'
ckpt_callback = keras.callbacks.ModelCheckpoint(
    filepath=os.path.join(root, folder, name + '-ep{epoch:03d}-loss{loss:.3f}-val_accuracy{val_accuracy:.3f}.h5'),
    monitor='val_loss',       # quantity to monitor
    save_weights_only=False,  # save the whole model
    save_best_only=False,
    mode='auto',
    period=1,                 # save after every epoch
)
callback = [ckpt_callback, lr_callback]

SGD = keras.optimizers.SGD(lr=0.001, momentum=0.9)
loss = keras.losses.CategoricalCrossentropy(from_logits=True)
model.compile(optimizer=SGD, loss=loss, metrics=['accuracy'])
model.fit_generator(
    generator=train_generator,
    epochs=4,
    steps_per_epoch=len(train_generator),
    validation_data=test_generator,
    validation_steps=len(test_generator),
    callbacks=callback
)

My code is above; the error is in the attached screenshot. I created the three checkpoint folders beforehand (screenshot). Where did it go wrong?

  • 0 answers · 5 views

When calling model.compile with metrics=['acc'], model.evaluate returns two values, loss and acc; but if compile is called without the metrics argument, evaluate returns only a single loss value. Could someone explain why?

  • 5 answers · 27 views

As shown in the image: the code is unmodified source code and should be correct, but this error keeps appearing. I checked that the Python in the folder is version 3.7, and in Anaconda Prompt I also switched numpy to version 1.20.1, but it still fails — why???

  • 3 answers · 23 views

On Windows, running a .py file in PyCharm fails with "Fatal Python error: Aborted". The details are as follows:

Thread 0x000013b8 (most recent call first):
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1429 in _call_tf_sessionrun
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1341 in _run_fn
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1356 in _do_call
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1350 in _do_run
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 1173 in _run
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\client\session.py", line 950 in run
  File "F:\c盘\Learning-to-See-in-the-Dark-master\test_Sony.py", line 142 in <module>
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\assertion\rewrite.py", line 170 in exec_module
  File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
  File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 971 in _find_and_load
  File "<frozen importlib._bootstrap>", line 994 in _gcd_import
  File "D:\anaconda\envs\tensorflow-gpu\lib\importlib\__init__.py", line 126 in import_module
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\pathlib.py", line 524 in import_path
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\python.py", line 578 in _importtestmodule
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\python.py", line 500 in _getobj
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\python.py", line 291 in obj
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\python.py", line 516 in _inject_setup_module_fixture
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\python.py", line 503 in collect
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\runner.py", line 341 in <lambda>
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\runner.py", line 311 in from_call
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\runner.py", line 341 in pytest_make_collect_report
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\callers.py", line 187 in _multicall
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 87 in <lambda>
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 93 in _hookexec
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\hooks.py", line 286 in __call__
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\runner.py", line 458 in collect_one_node
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 808 in genitems
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 634 in perform_collect
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 333 in pytest_collection
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\callers.py", line 187 in _multicall
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 87 in <lambda>
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 93 in _hookexec
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\hooks.py", line 286 in __call__
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 322 in _main
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 269 in wrap_session
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\main.py", line 316 in pytest_cmdline_main
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\callers.py", line 187 in _multicall
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 87 in <lambda>
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\manager.py", line 93 in _hookexec
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\pluggy\hooks.py", line 286 in __call__
  File "D:\anaconda\envs\tensorflow-gpu\lib\site-packages\_pytest\config\__init__.py", line 163 in main
  File "D:\PyCharm\PyCharm Community Edition 2020.2.4\plugins\python-ce\helpers\pycharm\_jb_pytest_runner.py", line 43 in <module>

How can I fix this problem?

  • 0 answers · 8 views

AttributeError: 'KerasClassifier' object has no attribute 'save'. How do I save a Keras model trained with cross-validation? The answers on Stack Overflow feel too terse and are not beginner-friendly. Any guidance would be appreciated.
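One likely reason for the error: KerasClassifier is a scikit-learn-style wrapper, and the wrapper object itself has no save(); it is the underlying fitted Keras model (exposed as the wrapper's .model attribute after fit) that does. A toy sketch of that wrapper pattern (the class names here are illustrative stand-ins, not the real implementation):

```python
class ToyKerasModel:
    # stands in for a keras.Model, which does have save()
    def save(self, path):
        self.saved_to = path

class ToyKerasClassifier:
    # mimics the wrapper: no save() of its own; it builds and stores
    # the underlying model only when fit() is called
    def __init__(self, build_fn):
        self.build_fn = build_fn
    def fit(self, X, y):
        self.model = self.build_fn()  # the actual model lives here
        return self

clf = ToyKerasClassifier(build_fn=ToyKerasModel)
clf.fit([[0.0]], [0])
clf.model.save('toy.h5')     # works: save() lives on the inner model
print(hasattr(clf, 'save'))  # False -> the AttributeError above
```

Calling save on the wrapper reproduces the AttributeError; calling it on the stored model is the usual workaround.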

Answered by 小白炼丹师 (100% acceptance rate), 11 days ago
  • 1 answer · 11 views

import tensorflow as tf

batch_size = 10
learning_rate = 0.001
tf.train.AdamOptimizer(learning_rate)

So: is the learning_rate for a single sample 0.0001 or 0.001? And is the total learning_rate for one batch 0.01 or 0.001?
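For intuition: the learning rate is applied once per optimizer step (one batch), to the gradient averaged over that batch; it is neither divided among the samples nor multiplied by batch_size. A minimal plain-Python sketch of one vanilla SGD step (Adam layers moment estimates on top, but uses the learning rate the same way):

```python
def sgd_step(w, per_sample_grads, lr=0.001):
    # one optimizer step per batch: lr multiplies the *mean* gradient,
    # so batch_size does not scale the effective learning rate
    avg_grad = sum(per_sample_grads) / len(per_sample_grads)
    return w - lr * avg_grad

batch_size = 10
grads = [2.0] * batch_size  # identical per-sample gradients
w_new = sgd_step(1.0, grads, lr=0.001)
print(w_new)  # ~0.998: the step is lr * mean gradient, regardless of batch_size
```

With a batch of 100 identical gradients the update would be exactly the same, which answers the question: the learning rate is 0.001 per update step, for any batch size.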