  • 0 answers · 3 views

Why does the output of my convolutional layer become NaN? The network's input data contains no dirty values; I'm using an open-source dataset from OpenAI directly. The loss kept decreasing normally during training, but when it dropped to around 0.017 the convolutional layer's output became NaN.
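
A common way to narrow this down is to assert on NaN/Inf right after the suspect layer and to clip the gradient magnitude; a minimal TensorFlow 2 sketch, where model, loss_fn, x and y stand in for your own objects:

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-3, clipnorm=1.0)  # clip gradients to limit blow-ups

@tf.function
def train_step(model, loss_fn, x, y):
    with tf.GradientTape() as tape:
        out = model(x, training=True)
        # Raises as soon as the layer output contains NaN or Inf, so you can
        # see which batch/step first goes bad instead of noticing much later.
        out = tf.debugging.check_numerics(out, "conv output")
        loss = loss_fn(y, out)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

If the assertion fires well before the loss itself turns NaN, the usual suspects are an exploding gradient, a too-large learning rate, or a log/division inside the loss.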

  • 0 answers · 13 views

I've built a Python (3.5) environment on the development board, and simple TensorFlow (version 1.8.0 & 1.12.1) code runs fine, but as soon as I train a model it reports an "Illegal instruction" error, while the same code is fine on the PC. When I restore a model trained on the PC, it also reports the error. I have checked many websites but found no similar problem. How can I solve this?

A simple example to illustrate; the code is as follows:

import tensorflow as tf
import numpy as np

x = np.float32(np.random.rand(100, 1))
y = np.dot(x, 0.5) + 0.7
b = tf.Variable(np.float32(0.3))
a = tf.Variable(np.float32(0.3))
y_value = tf.multiply(x, a) + b
loss = tf.reduce_mean(tf.square(y_value - y))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(0, 100):
    print(step)
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(loss), sess.run(a), sess.run(b))

On the PC this runs with no problem at all, but on the ARM board it prints:

0
Illegal instruction

So the error occurs at sess.run(train). I have tested sess.run() itself on the ARM board and it works, so it is the gradient-descent step that cannot run and triggers the illegal instruction.

ARM CPU information:

root@EmbedSky-Board:/xzy/mix# lscpu
Architecture:        armv7l
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
Model name:          ARMv7 Processor rev 10 (v7l)
CPU max MHz:         996.0000
CPU min MHz:         792.0000
root@EmbedSky-Board:/xzy/mix# cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 10 (v7l)
BogoMIPS        : 6.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpd32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 10

Additional information: the ARM board is a TQIMX6Q (armv7, NXP i.MX6Q Cortex-A9, 4x1 GHz); tensorflow comes from https://www.piwheels.org/simple/. This problem is really strange!

  • 0 answers · 5 views

Can YOLOv4 be combined with an LSTM network? I want to use YOLOv4's detection results and use an LSTM to compute the time.

  • 0 answers · 6 views

I don't know what the models module is. Could someone explain it?

  • 0 answers · 5 views

usage: video.py [-h] [--device DEVICE] [--camera-id CAMERA_ID] [--out OUT]
                [--score-thr SCORE_THR]
                config checkpoint
video.py: error: unrecognized arguments: --file /home/cust02/SH.mp4
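
The usage line shows that video.py only defines config and checkpoint as positional arguments plus the listed options, and has no --file flag, which is why argparse rejects it. A minimal sketch that reproduces the parser implied by that usage line (the real script may define more arguments, so check its add_argument calls for how the video path is expected to be passed):

import argparse

parser = argparse.ArgumentParser(prog="video.py")
parser.add_argument("config")                      # positional: config file
parser.add_argument("checkpoint")                  # positional: checkpoint file
parser.add_argument("--device", default="cuda:0")
parser.add_argument("--camera-id", type=int, default=0)
parser.add_argument("--out")
parser.add_argument("--score-thr", type=float, default=0.3)

# Passing "--file /home/cust02/SH.mp4" to this parser triggers exactly
# "error: unrecognized arguments: --file ...", because no such option exists.
args = parser.parse_args()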

  • 2 answers · 4 views

My Keras version is 2.4.3. I cloned the code from https://github.com/titu1994/DenseNet, but when it reaches from keras.utils.conv_utils import normalize_data_format it raises: cannot import name 'normalize_data_format' from 'keras.utils.conv_utils'. I looked it up and people online suggest replacing it with from keras.backend.common import normalize_data_format, but that raises: ModuleNotFoundError: No module named 'keras.backend.common'; 'keras.backend' is not a package. For certain reasons I don't dare change my Keras version. Is there any way to find where normalize_data_format lives in this version of Keras? I'm not very familiar with how the library is laid out; I looked at the init file in Anaconda3\Lib\site-packages\tensorflow\keras\backend but couldn't figure it out.
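
Since normalize_data_format is only a small validation helper, one workaround (a sketch, not the library's official location for it) is to define it locally and import that instead of hunting through Keras internals; this mirrors what the original helper does:

from tensorflow.keras import backend as K

def normalize_data_format(value):
    # Fall back to the globally configured image data format when none is given.
    if value is None:
        value = K.image_data_format()
    data_format = value.lower()
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('The `data_format` argument must be one of '
                         '"channels_first", "channels_last". Received: ' + str(value))
    return data_format

Then replace the failing import in the DenseNet code with an import of this local function.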

  • 0 answers · 16 views

""" After moving all the files using the 1_ file, we run this one to extract the images from the videos and also create a data file we can use for training and testing later. """ import csv import glob import os import os.path import subprocess from subprocess import call def extract_files(): """After we have all of our videos split between train and test, and all nested within folders representing their classes, we need to make a data file that we can reference when training our RNN(s). This will let us keep track of image sequences and other parts of the training process. We'll first need to extract images from each of the videos. We'll need to record the following data in the file: [train|test], class, filename, nb frames Extracting can be done with ffmpeg: `ffmpeg -i video.mpg image-%04d.jpg` """ data_file = [] folders = ['E:/Graduation Project/UCF-101_video_classification-master/data/train/', 'E:/Graduation Project/UCF-101_video_classification-master/data/test/'] for folder in folders: class_folders = glob.glob(folder + '*') for vid_class in class_folders: class_files = glob.glob(vid_class + '/*.avi') for video_path in class_files: # Get the parts of the file. video_parts = get_video_parts(video_path) train_or_test, classname, filename_no_ext, filename = video_parts # Only extract if we haven't done it yet. Otherwise, just get # the info. if not check_already_extracted(video_parts): # Now extract it. src = train_or_test + '/' + classname + '/' + \ filename dest = train_or_test + '/' + classname + '/' + \ filename_no_ext + '-%04d.jpg' subprocess.call(["ffmpeg", "-i", src, dest]) # Now get how many frames it is. nb_frames = get_nb_frames_for_video(video_parts) data_file.append([train_or_test, classname, filename_no_ext, nb_frames]) print("Generated %d frames for %s" % (nb_frames, filename_no_ext)) 运行结果: Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� ���������ļ��� Generated 0 frames for data 'ffmpeg' �����ڲ����ⲿ���Ҳ���ǿ����еij��� 一堆这个东西。根据代码最后一行,说明没有成功从视频中提取帧,所以能帮忙看看哪里的问题吗?

  • 2 answers · 36 views

I'm experimenting with music voiceprints. I first run vocal separation on the songs, detect the vocal segments, split them into clips, and build a dataset to train a singer-identification model; I have plenty of songs to work with. The problem now is that test accuracy won't go up. With 5,000 training samples the test accuracy stayed around 0.2; after increasing to 20,000 samples it rose to 0.6; but now that I've increased the training set to 60,000 samples it is still stuck around 0.6. I'm using the network model from this project: https://github.com/yeyupiaoling/VoiceprintRecognition-Tensorflow. What should I do next to improve accuracy on the test samples, apart from adding more data (which is mainly what I've been doing, with no change)?

  • 2 answers · 17 views

How do I solve this problem? I couldn't find anything about it online: module 'subprocess' has no attribute 'src'
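
That message means the code somewhere accesses subprocess.src, i.e. it treats src as an attribute of the subprocess module instead of using a local variable. A minimal illustration (the file names are placeholders):

import subprocess

src = "video.avi"                     # local variable holding the input path
dest = "video-%04d.jpg"

# Wrong: subprocess has no attribute `src`, so this line raises
# AttributeError: module 'subprocess' has no attribute 'src'.
# subprocess.call(["ffmpeg", "-i", subprocess.src, dest])

# Right: pass the local variable directly.
subprocess.call(["ffmpeg", "-i", src, dest])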

  • 3 answers · 32 views

I'm building a software GUI for generating adversarial examples, but I don't know how to make a button (or some other control) run the generation program. Please help. The program is a fairly large one, not a simple function.
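
If the generator can be started as its own script, one simple pattern is to have the button launch it in a background thread so the window stays responsive; a minimal tkinter sketch, where generate_adversarial.py is a placeholder for your actual entry script:

import subprocess
import threading
import tkinter as tk

def run_generator():
    # Run the large program as a separate process so the GUI does not freeze.
    def worker():
        subprocess.run(["python", "generate_adversarial.py"], check=False)
    threading.Thread(target=worker, daemon=True).start()

root = tk.Tk()
tk.Button(root, text="Generate adversarial examples", command=run_generator).pack(padx=20, pady=20)
root.mainloop()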

  • 4 answers · 22 views

Following the method described in the Zhihu article 视频分类之UCF-101上的CNN方法详解 (zhihu.com), I downloaded the source code and the dataset; the first .py file is the one that moves files. As the article instructs, I put the dataset files under data, with the ucfTrainTestlist folder and the dataset folder arranged as shown in my screenshots (not included here). In the source code I changed the two file paths:

"""
After extracting the RAR, we run this to move all the files into
the appropriate train/test folders.

Should only run this file once!
"""
import os
import os.path

def get_train_test_lists(version='01'):
    """
    Using one of the train/test files (01, 02, or 03), get the filename
    breakdowns we'll later use to move everything.
    """
    # Get our files based on version.
    test_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/testlist' + version + '.txt'
    train_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/trainlist' + version + '.txt'

    # Build the test list.
    with open(test_file) as fin:
        test_list = [row.strip() for row in list(fin)]

    # Build the train list. Extra step to remove the class index.
    with open(train_file) as fin:
        train_list = [row.strip() for row in list(fin)]
        train_list = [row.split(' ')[0] for row in train_list]

    # Set the groups in a dictionary.
    file_groups = {
        'train': train_list,
        'test': test_list
    }

    return file_groups

def move_files(file_groups):
    """This assumes all of our files are currently in _this_ directory.
    So move them to the appropriate spot. Only needs to happen once.
    """
    # Do each of our groups.
    for group, videos in file_groups.items():

        # Do each of our videos.
        for video in videos:

            # Get the parts.
            parts = video.split('/')
            classname = parts[0]
            filename = parts[1]

            # Check if this class exists.
            if not os.path.exists(group + '/' + classname):
                print("Creating folder for %s/%s" % (group, classname))
                os.makedirs(group + '/' + classname)

            # Check if we have already moved this file, or at least that it
            # exists to move.
            if not os.path.exists(filename):
                print("Can't find %s to move. Skipping." % (filename))
                continue

            # Move it.
            dest = group + '/' + classname + '/' + filename
            print("Moving %s to %s" % (filename, dest))
            os.rename(filename, dest)

    print("Done.")

def main():
    """
    Go through each of our train/test text files and move the videos
    to the right place.
    """
    # Get the videos in groups so we can move them.
    group_lists = get_train_test_lists()

    # Move the files.
    move_files(group_lists)

if __name__ == '__main__':
    main()

But when I run it, it shows an error (screenshot not included). Why is this happening?
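
The actual traceback isn't shown, but note that move_files() assumes the videos sit in the current working directory (see its docstring) and builds relative destination paths like train/<class>/<file>. One thing to try, purely as an assumption about the cause, is to change into the data folder before main() runs:

import os

# Assumption: the videos extracted from the RAR sit in this data folder, so the
# relative paths used by move_files() resolve against the right location.
os.chdir('E:/Graduation Project/UCF-101_video_classification-master/data')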

  • 0 answers · 7 views

""" After extracting the RAR, we run this to move all the files into the appropriate train/test folders. Should only run this file once! """ import os import os.path def get_train_test_lists(version='01'): """ Using one of the train/test files (01, 02, or 03), get the filename breakdowns we'll later use to move everything. """ # Get our files based on version. test_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/testlist' + version + '.txt' train_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/trainlist' + version + '.txt' # Build the test list. with open(test_file) as fin: test_list = [row.strip() for row in list(fin)] # Build the train list. Extra step to remove the class index. with open(train_file) as fin: train_list = [row.strip() for row in list(fin)] train_list = [row.split(' ')[0] for row in train_list] # Set the groups in a dictionary. file_groups = { 'train': train_list, 'test': test_list } return file_groups def move_files(file_groups): """This assumes all of our files are currently in _this_ directory. So move them to the appropriate spot. Only needs to happen once. """ # Do each of our groups. for group, videos in file_groups.items(): # Do each of our videos. for video in videos: # Get the parts. parts = video.split('/') classname = parts[0] filename = parts[1] # Check if this class exists. if not os.path.exists(group + '/' + classname): print("Creating folder for %s/%s" % (group, classname)) os.makedirs(group + '/' + classname) # Check if we have already moved this file, or at least that it # exists to move. if not os.path.exists(filename): print("Can't find %s to move. Skipping." % (filename)) continue # Move it. dest = group + '/' + classname + '/' + filename print("Moving %s to %s" % (filename, dest)) os.rename(filename, dest) print("Done.") def main(): """ Go through each of our train/test text files and move the videos to the right place. """ # Get the videos in groups so we can move them. group_lists = get_train_test_lists() # Move the files. move_files(group_lists) if __name__ == '__main__': main() 这样一段代码,按照视频分类之UCF-101上的CNN方法详解 - 知乎 (zhihu.com),改了文件路径。 UCF-101文件夹下是所有视频数据文件夹,如下   按原代码的安排: 但是运行后显示: 请问这是为啥?

  • 3 answers · 16 views

Why is it that, using my own trained model with the official yolov5 code, prediction runs normally but the results contain only the images, with no predictions drawn on them?

  • 0 answers · 7 views

I get "RuntimeError: Error compiling objects for extension" while setting up a PyTorch environment. How do I resolve it?
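
This error comes up when building a C++/CUDA extension, and a frequent cause is a mismatch between the locally installed CUDA toolkit and the CUDA version PyTorch was compiled against (that is an assumption here, since the full build log isn't shown). A quick check:

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version this PyTorch build expects
print(torch.cuda.is_available())  # whether the GPU and driver are usable

Compare torch.version.cuda with the output of nvcc --version before rebuilding the extension.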

  • 2 answers · 33 views

I've recently been learning about neural networks for work (I've been working for 10 years, mainly on server-side systems); we're now collaborating with other AI teams on related capabilities, and I'm personally interested in this area too. My questions: is someone else's model useful as a reference for my own learning? I believe it should be: I can look at their model's parameter settings and even take them directly and apply them to training on my own data, right? How exactly do I do that in practice? There seem to be several visualization approaches, but I don't really understand the concrete steps. Also, how do I assess whether the other party's model still has room for optimization?
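
As a sketch of what "borrowing" someone else's model can look like in practice: load it, inspect the architecture and parameter counts, then fine-tune it on your own data (the file name and the data variables below are placeholders, and a Keras-format model is assumed):

from tensorflow import keras

model = keras.models.load_model("their_model.h5")   # someone else's trained model
model.summary()                                      # inspect layers and parameter counts

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_my_data / y_my_data stand in for your own dataset.
model.fit(x_my_data, y_my_data, epochs=3)            # fine-tune on your own data

Whether a model still has room for optimization is usually judged empirically, for example by whether validation metrics keep improving as you adjust capacity, regularization, or training data.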

  • 13 answers · 49 views

I need an in-vehicle positioning terminal that includes an encryption chip compliant with the China VI (国六) standard. Can anyone recommend one?

  • 0 answers · 8 views

This is an error from Nuke 13's deep-learning node saying that the value of CUDA_CACHE_MAXSIZE is set too small. I've gone through a lot of online tutorials on configuring the environment without solving it. I'm using an RTX 3090; on another machine with a GTX 1060 the deep-learning node worked fine, but on the 3090 machine this problem appears.
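
CUDA_CACHE_MAXSIZE is the environment variable that limits CUDA's JIT compilation cache, and the message suggests raising it. Purely as an assumption about what the node wants, one way to test is to launch Nuke with a larger limit set in its environment (the executable path is a placeholder for your own install):

import os
import subprocess

env = dict(os.environ)
env["CUDA_CACHE_MAXSIZE"] = str(4 * 1024 ** 3)   # allow a 4 GB JIT cache
subprocess.Popen([r"C:\Program Files\Nuke13.0v1\Nuke13.0.exe"], env=env)

Setting the same variable as a system-wide environment variable and restarting Nuke should have the equivalent effect.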

  • 4 answers · 18 views

import os
import config
import shutil

for split in (config.TRAIN, config.VALID, config.TEST):
    print('[INFO] processing {} split:'.format(split))
    imagePaths = os.listdir(os.path.join(config.ORIG_DATA_PATH, split))
    for ele in imagePaths:
        if not ele.endswith('.jpg'):
            imagePaths.remove(ele)
    for imagePath in imagePaths:
        label = config.CLASSES[int(imagePath.split('_')[0])]
        dst = os.path.join(config.BASE_PATH, split, label)
        if not os.path.exists(dst):
            os.makedirs(dst)
        # Copy the image
        shutil.copy2(os.path.join(config.ORIG_DATA_PATH, config.TRAIN, imagePath),
                     os.path.join(dst, imagePath))

print('[INFO] All is done')

This splits a CNN's training, validation, and test sets by class. I think config.TRAIN in the second-to-last line of the loop should be changed to split, but I'm not sure whether that's right. Please advise.
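
Your suspicion looks reasonable: the image names are listed from ORIG_DATA_PATH/<split>, yet the copy source is hard-coded to the TRAIN split, so the VALID and TEST passes would try to copy from the wrong folder. A sketch of the loop body rewritten under that assumption (it also filters the file list with a comprehension instead of removing items while iterating, which can skip entries):

imagePaths = [p for p in os.listdir(os.path.join(config.ORIG_DATA_PATH, split))
              if p.endswith('.jpg')]            # keep only .jpg files, no in-place removal

for imagePath in imagePaths:
    label = config.CLASSES[int(imagePath.split('_')[0])]
    dst = os.path.join(config.BASE_PATH, split, label)
    os.makedirs(dst, exist_ok=True)
    # Source now uses `split` instead of the hard-coded config.TRAIN.
    shutil.copy2(os.path.join(config.ORIG_DATA_PATH, split, imagePath),
                 os.path.join(dst, imagePath))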

  • 2 answers · 34 views

A very simple piece of code, only 32 lines:

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

fashion = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)  # add a channel dimension so the data matches the network input

image_gen_train = ImageDataGenerator(
    rescale=1. / 1.,          # for images, a denominator of 255 rescales to 0~1
    rotation_range=45,        # random rotation up to 45 degrees
    width_shift_range=.15,    # width shift
    height_shift_range=.15,   # height shift
    horizontal_flip=True,     # horizontal flip
    zoom_range=0.5            # random zoom of up to 50%
)
image_gen_train.fit(x_train)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])
model.fit(image_gen_train.flow(x_train, y_train, batch_size=32), epochs=5,
          validation_data=(x_test, y_test), validation_freq=1)
model.summary()

But it raises an error when run (screenshot not included). Why is that? The environment configuration seems fine; I'm using the TensorFlow environment in Anaconda.

  • 0 answers · 4 views

On Ubuntu 18.04, how can a Python program use YOLOv4's detection results? For example, every time YOLOv4 detects a target, another program increments a counter (just an example). Thanks.
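
One way to get YOLOv4 detections into ordinary Python code, without touching the darknet binary, is OpenCV's DNN module (OpenCV 4.4+ can read the .cfg/.weights pair directly); the file names below are placeholders for your own model and image:

import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("test.jpg")
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

count = 0
for cls, conf, box in zip(class_ids, confidences, boxes):
    count += 1                     # e.g. increment a counter for every detected target
print("detections this frame:", count)

Each call to detect returns the class ids, confidences and boxes for one image or frame, so any other logic (counting, triggering another program, and so on) can hang off that loop.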

  • 3 answers · 15 views

I want to use Python to implement a BP neural network or an LSTM (long short-term memory) network for ultra-short-term power forecasting; the samples are read from a file, the input data needs to be normalized, and the results should be shown in a UI.
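
A minimal sketch of the normalize-then-LSTM part of such a pipeline (the file name, window length, and layer sizes are placeholder choices rather than a tuned model; the UI part is omitted):

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

data = np.loadtxt("power.csv", delimiter=",")       # one power value per row (placeholder file)
scaler = MinMaxScaler()
scaled = scaler.fit_transform(data.reshape(-1, 1))  # normalize to [0, 1]

window = 24                                         # predict from the previous 24 points
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])
y = scaled[window:]

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32)

pred = scaler.inverse_transform(model.predict(X[-1:]))  # back to the original units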

  • 3 answers · 18 views

The texture features of fire show up mainly in its edges and colors. I want to control generation so that the same edges produce different colors, or the same colors produce different edges. Implementing a one-to-one translation task with a GAN is straightforward, but a one-to-many translation task is hard. I've searched a lot of material without finding an answer and hope someone can offer some inspiration.

  • 1 answer · 23 views

For speech recognition implemented with Baidu's Paddle, how long does training the model take? Is there anyone with plenty of training experience who could answer?

  • 5 answers · 16 views

#include <OpenMS/KERNEL/MSSpectrum.h>   // error here: cannot open openms/kernel/msspectrum.h
#include <iostream>

using namespace OpenMS;                 // error here too: "name must be a namespace name"
using namespace std;

int main()
{
    // Create spectrum
    MSSpectrum spectrum;                // error here too; I suspect the header or namespace is still wrong
    Peak1D peak;                        // error here too; I suspect the header is wrong

    for (float mz = 1500.0; mz >= 500; mz -= 100.0)
    {
        peak.setMZ(mz);
        spectrum.push_back(peak);
    }

    // Sort the peaks according to ascending mass-to-charge ratio
    spectrum.sortByPosition();

    // Iterate over spectrum of those peaks between 800 and 1000 Thomson
    for (auto it = spectrum.MZBegin(800.0); it != spectrum.MZEnd(1000.0); ++it)
    {
        cout << it->getMZ() << endl;
    }

    // Access a peak by index
    cout << spectrum[1].getMZ() << " " << spectrum[1].getIntensity() << endl;

    // ... and many more
    return 0;
}

This is a newly created VS2019 project, and I copied this snippet from a tutorial. I do have the header package, and that header includes other headers, all under the OpenMS path.

  • 0 answers · 8 views

Training and validation accuracy reach over 90%, but test accuracy is only about 30%, and it's the same whether the model is TextCNN or Bi-LSTM with attention. I've applied the usual generalization measures: L2 regularization, dropout, BatchNorm layers, data augmentation, and so on, but nothing changes. The data all comes from a single dataset split randomly. Could it be that the training sentences are only weakly related to their labels, so the label cannot be inferred from the sentence (the labels were assigned by different people who may have used inconsistent rules, so they may be inaccurate)? Does anyone have any ideas or suggestions?

  • 0 answers · 7 views

After downloading the Cora dataset, how do I load it?
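
Assuming this is the raw Cora download (cora.content with one line per paper: id, 1433 binary word features, class label; and cora.cites with one citing/cited pair per line), a minimal NumPy sketch:

import numpy as np

content = np.genfromtxt("cora/cora.content", dtype=str)
node_ids = content[:, 0]                          # paper ids
features = content[:, 1:-1].astype(np.float32)    # 1433 bag-of-words features
labels = content[:, -1]                           # subject label as a string

edges = np.genfromtxt("cora/cora.cites", dtype=str)  # (cited id, citing id) pairs
print(features.shape, len(set(labels)), edges.shape)

Libraries such as PyTorch Geometric also ship Cora as a built-in dataset if you only need it for graph-learning experiments.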

  • 2 answers · 60 views

I want to use MATLAB for a regression problem. I first tried image-to-image regression, and I would most like to work with .mat files, since they are convenient for both image-to-image and image-to-vector tasks. My idea is to read the input and target .mat files with fileDatastore, combine them into training pairs with combine, and then train; the training and validation sets sit in different folders that are specified in the code. Here is my code:

inputData = fileDatastore(fullfile('A:\wtl\New\CNN3\1\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
targetData = fileDatastore(fullfile('A:\wtl\New\CNN3\2\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
trainData = combine(inputData, targetData);   % training data

% validation data
inputDatat = fileDatastore(fullfile('A:\wtl\New\CNN3\3\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
targetDatat = fileDatastore(fullfile('A:\wtl\New\CNN3\4\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
valData = combine(inputDatat, targetDatat);   % validation data

%% training options
Minibatchsize = 4;
options = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', Minibatchsize, ...
    'ValidationData', valData, ...
    'Plots', 'training-progress', ...
    'Verbose', false);
net = trainNetwork(trainData, layers_1, options);   % train the network

When I started training I ran into a problem: it says my training data contains NaN values, yet none of my .mat files contain NaN. Unlike the example on the official site, my inputs and targets cannot be derived from each other with a transform function; the two sets of data were collected independently, although they are related.

Error using trainNetwork (line 183)
Invalid training data. For regression tasks, responses must be a vector, a matrix, or a 4-D array of numeric responses. Responses must not contain NaNs.

I then tried cleaning the data with isnan right before saving the .mat files, but got the same problem. Finally I tried generating the matrices myself and saving them as .mat files; the matrix was simply [1 2 3; 4 5 6] for every file in the dataset, but the same error still appears. Has anyone run into the same problem? Please help me solve it; if you do, Starbucks, milk tea, or cash, whatever works. One idea I have is to write my own transform function that essentially applies isnan, but my attempts at writing such a transform function keep erroring out. If anyone has done this before, please help. Thanks!