  • 2 answers · 8 views

I trained my model on Windows and load it with the load_model method from tensorflow.keras.models. Is the fix the same in my case?
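A minimal cross-platform loading sketch with tf.keras (the file name my_model.h5 and the custom_objects entry are placeholders, not from this thread): the usual points to watch are the path format and any custom layers or losses the model was built with.

from tensorflow.keras.models import load_model

# Hypothetical path to the model file copied over from Windows; forward slashes
# (or a raw string) avoid Windows backslash-escape issues.
model = load_model("models/my_model.h5", compile=False)

# If the model was built with custom layers/losses, pass them explicitly, e.g.:
# model = load_model("models/my_model.h5",
#                    custom_objects={"MyCustomLayer": MyCustomLayer}, compile=False)
model.summary()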

  • 1 answer · 40 views

Question: I've built a Python (3.5) environment on the development board, and simple TensorFlow (versions 1.8.0 and 1.12.1) code runs fine, but as soon as I train a model it crashes with "Illegal instruction"; the same code works on the PC. Restoring a model trained on the PC triggers the same error. I've searched many websites but found no similar problem. How can I solve this? A minimal example:

import tensorflow as tf
import numpy as np

x = np.float32(np.random.rand(100, 1))
y = np.dot(x, 0.5) + 0.7
b = tf.Variable(np.float32(0.3))
a = tf.Variable(np.float32(0.3))
y_value = tf.multiply(x, a) + b
loss = tf.reduce_mean(tf.square(y_value - y))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(0, 100):
    print(step)
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(loss), sess.run(a), sess.run(b))

This runs without any problem on the PC. On the ARM board it prints:

0
Illegal instruction

That is, the crash happens at sess.run(train). I have tested sess.run() by itself on the board and it works, so it seems the gradient-descent step is what triggers the illegal instruction. ARM CPU information:

root@EmbedSky-Board:/xzy/mix# lscpu
Architecture:          armv7l
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
Model name:            ARMv7 Processor rev 10 (v7l)
CPU max MHz:           996.0000
CPU min MHz:           792.0000
root@EmbedSky-Board:/xzy/mix# cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 10 (v7l)
BogoMIPS        : 6.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpd32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 10

Additional information: the board is a TQIMX6Q (ARMv7, NXP i.MX6Q Cortex-A9 4x1GHz), and TensorFlow was installed from https://www.piwheels.org/simple/. This problem is really strange!

  • 3 answers · 7 views

My Keras version is 2.4.3. I cloned the code from https://github.com/titu1994/DenseNet, but the line from keras.utils.conv_utils import normalize_data_format fails with: cannot import name 'normalize_data_format' from 'keras.utils.conv_utils'. Online suggestions say to replace it with from keras.backend.common import normalize_data_format, but that raises ModuleNotFoundError: No module named 'keras.backend.common'; 'keras.backend' is not a package. For various reasons I would rather not change my Keras version. Is there a way to find where normalize_data_format lives in this version? I'm not very familiar with library internals; I looked at the __init__ file under Anaconda3\Lib\site-packages\tensorflow\keras\backend but couldn't tell.
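If the import simply cannot be located in this Keras version, one low-risk workaround is to define the helper locally and point the DenseNet code at it; it is a very small function. The sketch below mirrors the upstream implementation from memory, so treat it as an approximation rather than the exact library source:

from tensorflow.keras import backend as K

def normalize_data_format(value):
    """Local stand-in for keras.utils.conv_utils.normalize_data_format."""
    if value is None:
        return K.image_data_format()
    data_format = value.lower()
    if data_format not in {'channels_first', 'channels_last'}:
        raise ValueError('The `data_format` argument must be one of '
                         '"channels_first", "channels_last". Received: ' + str(value))
    return data_format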

  • 3 answers · 32 views

I'm building a GUI for a program that generates adversarial examples, but I don't know how to launch the program from a button (or some other control). The program is fairly large, not a simple function. Any help would be appreciated; see the sketch below.
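A minimal sketch of one common pattern (assuming Tkinter as the GUI toolkit and a hypothetical entry-point script generate_adv.py for the large program): the button callback launches the program in a subprocess on a worker thread, so the long run does not freeze the window.

import subprocess
import threading
import tkinter as tk

def run_generator():
    # Replace with the real command or entry point of the adversarial-example program.
    subprocess.run(["python", "generate_adv.py"], check=False)

def on_click():
    # Run in a background thread so the GUI stays responsive.
    threading.Thread(target=run_generator, daemon=True).start()

root = tk.Tk()
tk.Button(root, text="Generate adversarial examples", command=on_click).pack(padx=20, pady=20)
root.mainloop()

If the program is importable as a module rather than a script, the callback can call its main function directly inside the worker thread instead of spawning a subprocess.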

  • 4 answers · 28 views

Following the method described in the Zhihu article 视频分类之UCF-101上的CNN方法详解 (zhihu.com), I downloaded the source code and the dataset. The first .py file is the script that moves files. As the article requires, I put the dataset files under data. (Screenshots of the ucfTrainTestlist folder and the dataset folder omitted.) In the source code I changed the two file paths:

"""
After extracting the RAR, we run this to move all the files into
the appropriate train/test folders.

Should only run this file once!
"""
import os
import os.path

def get_train_test_lists(version='01'):
    """
    Using one of the train/test files (01, 02, or 03), get the filename
    breakdowns we'll later use to move everything.
    """
    # Get our files based on version.
    test_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/testlist' + version + '.txt'
    train_file = 'E:/Graduation Project/UCF-101_video_classification-master/data/ucfTrainTestlist/trainlist' + version + '.txt'

    # Build the test list.
    with open(test_file) as fin:
        test_list = [row.strip() for row in list(fin)]

    # Build the train list. Extra step to remove the class index.
    with open(train_file) as fin:
        train_list = [row.strip() for row in list(fin)]
        train_list = [row.split(' ')[0] for row in train_list]

    # Set the groups in a dictionary.
    file_groups = {
        'train': train_list,
        'test': test_list
    }

    return file_groups

def move_files(file_groups):
    """This assumes all of our files are currently in _this_ directory.
    So move them to the appropriate spot. Only needs to happen once.
    """
    # Do each of our groups.
    for group, videos in file_groups.items():

        # Do each of our videos.
        for video in videos:

            # Get the parts.
            parts = video.split('/')
            classname = parts[0]
            filename = parts[1]

            # Check if this class exists.
            if not os.path.exists(group + '/' + classname):
                print("Creating folder for %s/%s" % (group, classname))
                os.makedirs(group + '/' + classname)

            # Check if we have already moved this file, or at least that it
            # exists to move.
            if not os.path.exists(filename):
                print("Can't find %s to move. Skipping." % (filename))
                continue

            # Move it.
            dest = group + '/' + classname + '/' + filename
            print("Moving %s to %s" % (filename, dest))
            os.rename(filename, dest)

    print("Done.")

def main():
    """
    Go through each of our train/test text files and move the videos
    to the right place.
    """
    # Get the videos in groups so we can move them.
    group_lists = get_train_test_lists()

    # Move the files.
    move_files(group_lists)

if __name__ == '__main__':
    main()

But when I run it I get the following output (screenshot missing). Why is that?
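Since the error screenshot is missing, this is only a guess: move_files assumes the current working directory contains both the extracted .avi files and the train/test folders, so running the script from anywhere else produces "Can't find ... to move" (or a missing-folder error). A hedged sketch that builds absolute paths instead, reusing the data directory from the question (move_video is a hypothetical helper, not part of the original repo):

import os

DATA_DIR = 'E:/Graduation Project/UCF-101_video_classification-master/data'

def move_video(group, classname, filename, data_dir=DATA_DIR):
    """Move one video into data_dir/group/classname/ using absolute paths."""
    src = os.path.join(data_dir, filename)
    dest_dir = os.path.join(data_dir, group, classname)
    os.makedirs(dest_dir, exist_ok=True)
    if not os.path.exists(src):
        print("Can't find %s to move. Skipping." % src)
        return
    os.rename(src, os.path.join(dest_dir, filename))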

  • 3 answers · 17 views

Why is it that when I run the official yolov5 code with my own trained model, prediction runs normally but the output is only the image, with no detection results?
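A hedged way to check this (not the questioner's code): load the trained weights through the official yolov5 hub interface and lower the confidence threshold, since an output image with no boxes often just means every detection fell below the default threshold; the weight and image paths below are placeholders.

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')
model.conf = 0.1                          # temporarily lower the NMS confidence threshold
results = model('data/images/test.jpg')   # hypothetical test image
results.print()                           # prints any detections to the console
results.save()                            # saves annotated images under runs/detect/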

  • 14 answers · 53 views

I'm looking for a vehicle-mounted positioning terminal that requires an encryption chip compliant with the China VI (国六) standard. Any recommendations?

  • 4 answers · 18 views

import os
import config
import shutil

for split in (config.TRAIN, config.VALID, config.TEST):
    print('[INFO] processing {} split:'.format(split))
    imagePaths = os.listdir(os.path.join(config.ORIG_DATA_PATH, split))
    for ele in imagePaths:
        if not ele.endswith('.jpg'):
            imagePaths.remove(ele)
    for imagePath in imagePaths:
        label = config.CLASSES[int(imagePath.split('_')[0])]
        dst = os.path.join(config.BASE_PATH, split, label)
        if not os.path.exists(dst):
            os.makedirs(dst)
        # copy the image
        shutil.copy2(os.path.join(config.ORIG_DATA_PATH, config.TRAIN, imagePath),
                     os.path.join(dst, imagePath))
print('[INFO] All is done')

This splits a CNN dataset into training, validation, and test sets by class. I think the config.TRAIN in the second-to-last line (the source path of shutil.copy2) should be split instead, but I'm not sure whether that's right; could someone confirm?
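The suspicion looks reasonable. A hedged sketch of how the copy loop would read with split as the source directory (and with the .jpg filter done by a list comprehension instead of removing items from the list while iterating over it); config, os, and shutil are the same names as in the question's script:

for split in (config.TRAIN, config.VALID, config.TEST):
    src_dir = os.path.join(config.ORIG_DATA_PATH, split)
    # Keep only .jpg files; removing items during iteration can skip entries.
    imagePaths = [p for p in os.listdir(src_dir) if p.endswith('.jpg')]
    for imagePath in imagePaths:
        label = config.CLASSES[int(imagePath.split('_')[0])]
        dst = os.path.join(config.BASE_PATH, split, label)
        os.makedirs(dst, exist_ok=True)
        shutil.copy2(os.path.join(src_dir, imagePath), os.path.join(dst, imagePath))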

  • 3 answers · 15 views

I need to implement ultra-short-term power forecasting in Python with either a BP neural network or an LSTM. The samples are read from a file, the input data needs to be normalized, and the results should be shown in a UI.
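A minimal sketch of the forecasting part (assuming the samples live in a hypothetical CSV file power.csv with a numeric column named 'power'; the UI layer, e.g. Tkinter or PyQt, would wrap this and is not shown): read the data, scale it to [0, 1], build sliding windows, and fit a small Keras LSTM.

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

data = pd.read_csv('power.csv')['power'].values.reshape(-1, 1)
scaler = MinMaxScaler()
data = scaler.fit_transform(data)                 # normalize inputs to [0, 1]

window = 12                                       # predict the next point from the previous 12
X = np.array([data[i:i + window] for i in range(len(data) - window)])
y = data[window:]

model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=20, batch_size=32, verbose=1)

next_step = scaler.inverse_transform(model.predict(X[-1:]))  # forecast, back in original units
print(next_step)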

  • 3 answers · 20 views

The texture of fire is mainly characterized by its edges and colors. I want to control generation so that, for the same edges, different colors are produced, or for the same colors, different edges. Implementing a one-to-one translation task with a GAN is straightforward, but one-to-many translation is hard. I've searched a lot of material without results and would appreciate any inspiration.

  • 5 answers · 16 views

#include <OpenMS/KERNEL/MSSpectrum.h>  // error here: cannot open OpenMS/KERNEL/MSSpectrum.h
#include <iostream>

using namespace OpenMS;  // error here too: "name must be a namespace name"
using namespace std;

int main()
{
    // Create spectrum
    MSSpectrum spectrum;  // error here as well; I suspect the header or namespace is still the problem
    Peak1D peak;          // error here too; I suspect the header is wrong

    for (float mz = 1500.0; mz >= 500; mz -= 100.0)
    {
        peak.setMZ(mz);
        spectrum.push_back(peak);
    }

    // Sort the peaks according to ascending mass-to-charge ratio
    spectrum.sortByPosition();

    // Iterate over spectrum of those peaks between 800 and 1000 Thomson
    for (auto it = spectrum.MZBegin(800.0); it != spectrum.MZEnd(1000.0); ++it)
    {
        cout << it->getMZ() << endl;
    }

    // Access a peak by index
    cout << spectrum[1].getMZ() << " " << spectrum[1].getIntensity() << endl;

    // ... and many more
    return 0;
}

This is a newly created VS2019 project, and the code is copied from a tutorial. I do have this header file (and it includes other headers, all under the OpenMS path).

  • 2 answers · 60 views

I want to solve a regression problem in MATLAB, starting with image-to-image regression. I would prefer to work with .mat files, since they are convenient for both image-to-image and image-to-vector cases. My plan: read the input and target .mat files with fileDatastore, combine them into training pairs, and train on those; the training and validation sets live in different folders specified in the code. Here is my code:

inputData = fileDatastore(fullfile('A:\wtl\New\CNN3\1\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
targetData = fileDatastore(fullfile('A:\wtl\New\CNN3\2\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
trainData = combine(inputData, targetData);   %% training data

% validation data
inputDatat = fileDatastore(fullfile('A:\wtl\New\CNN3\3\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
targetDatat = fileDatastore(fullfile('A:\wtl\New\CNN3\4\'), 'ReadFcn', ...
    @load, 'FileExtensions', '.mat');
valData = combine(inputDatat, targetDatat);   %% validation data

%% training options
Minibatchsize = 4;
options = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', Minibatchsize, ...
    'ValidationData', valData, ...
    'Plots', 'training-progress', ...
    'Verbose', false);
net = trainNetwork(trainData, layers_1, options);   %% train the network

When I start training I hit a problem: it says my training data contains NaN values, but none of my .mat files contain NaN. Unlike the official example, my inputs and targets cannot be derived from each other via a transform function; the two sets were collected independently, although they are related.

Error using trainNetwork (line 183)
Invalid training data. For regression tasks, responses must be a vector, a matrix, or a 4-D array of numeric responses. Responses must not contain NaNs.

I then tried applying isnan to the data right before saving each .mat file, but got the same error. Finally, I generated the matrices myself, simply [1 2 3; 4 5 6] for every file in the dataset, and still hit the same error. Has anyone run into this? Starbucks, milk tea, or cash for whoever solves it. One idea I had was to write my own transform function that applies isnan, but my attempts keep erroring out. Thanks in advance!

  • 5 answers · 30 views

I'm new to deep learning; could someone help me with this error?

Traceback (most recent call last):
  File "D:/PYTHON/Human-Pose-Estimation-master/training/train_pose.py", line 223, in <module>
    multisgd = MultiSGD(lr=base_lr, momentum=momentum, decay=0.0,
  File "D:\PYTHON\Human-Pose-Estimation-master\training\optimizers.py", line 26, in __init__
    self.lr = K.variable(lr, name='lr')
AttributeError: can't set attribute

The call site and the optimizer's __init__ are:

multisgd = MultiSGD(lr=base_lr, momentum=momentum, decay=0.0,
                    nesterov=False, lr_mult=lr_multipliers)

def __init__(self, lr=0.01, momentum=0., decay=0.,
             nesterov=False, lr_mult=None, **kwargs):
    super(MultiSGD, self).__init__(**kwargs)
    with K.name_scope(self.__class__.__name__):
        self.iterations = K.variable(0, dtype='int64', name='iterations')
        self.lr = K.variable(lr, name='lr')
        self.momentum = K.variable(momentum, name='momentum')
        self.decay = K.variable(decay, name='decay')
    self.initial_decay = decay
    self.nesterov = nesterov
    self.lr_mult = lr_mult

  • 3 answers · 11 views

Error in cmd:

(base) E:\PaddleDetection>python -u tools/train.py -c configs/ssd/ssd_mobilenet_v1_voc.yml --eval
E:\anaconda\lib\site-packages\socks.py:58: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Callable
E:\anaconda\lib\site-packages\win32\lib\pywintypes.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp, sys, os
E:\anaconda\lib\site-packages\pkg_resources\_vendor\pyparsing.py:943: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  collections.MutableMapping.register(ParseResults)
E:\anaconda\lib\site-packages\paddle\fluid\layers\math_op_patch.py:298: UserWarning: E:\anaconda\lib\site-packages\paddle\fluid\layers\detection.py:1751
The behavior of expression A + B has been unified with elementwise_add(X, Y, axis=-1) from Paddle 2.0. If your code works well in the older versions but crashes in this version, try to use elementwise_add(X, Y, axis=0) instead of A + B. This transitional warning will be dropped in the future.
  op_type, op_type, EXPRESSION_MAP[method_name]))
2021-04-27 21:33:56,009 - INFO - If regularizer of a Parameter has been set by 'fluid.ParamAttr' or 'fluid.WeightNormParamAttr' already. The Regularization[L2Decay, regularization_coeff=0.000050] in Optimizer will not take effect, and it will only be applied to other Parameters!
W0427 21:33:57.019949 10288 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 11.0, Runtime API Version: 10.2
W0427 21:33:57.033360 10288 device_context.cc:372] device: 0, cuDNN Version: 7.6.
E:\PaddleDetection\ppdet\data\reader.py:89: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  if isinstance(item, collections.Sequence) and len(item) == 0:
Traceback (most recent call last):
  File "tools/train.py", line 399, in <module>
    main()
  File "tools/train.py", line 270, in main
    outs = exe.run(compiled_train_prog, fetch_list=train_values)
  File "E:\anaconda\lib\site-packages\paddle\fluid\executor.py", line 1110, in run
    six.reraise(*sys.exc_info())
  File "E:\anaconda\lib\site-packages\six.py", line 703, in reraise
    raise value
  File "E:\anaconda\lib\site-packages\paddle\fluid\executor.py", line 1108, in run
    return_merged=return_merged)
  File "E:\anaconda\lib\site-packages\paddle\fluid\executor.py", line 1251, in _run_impl
    return_merged=return_merged)
  File "E:\anaconda\lib\site-packages\paddle\fluid\executor.py", line 913, in _run_parallel
    tensors = exe.run(fetch_var_names, return_merged)._move_to_list()
OSError: (External) Cuda error(719), unspecified launch failure.
  [Advise: Please search for the error code(719) on website( https://docs.nvidia.com/cuda/archive/10.0/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038 ) to get Nvidia's official solution about CUDA Error.] (at D:\v2.0.2\paddle\paddle\fluid\platform\stream\cuda_stream.cc:65)

  • 3 answers · 19 views

I'm looking for PyTorch code for the CornerNet detection algorithm trained on the VOC2007 dataset. Thanks in advance.

  • 5 answers · 38 views

When running the following code:

model = load_model('unet_brain_mri_seg.hdf5',
                   custom_objects={'dice_coef_loss': dice_coef_loss, 'iou': iou, 'dice_coef': dice_coef})
test_gen = train_generator(df_test, BATCH_SIZE,
                           dict(),
                           target_size=(im_height, im_width))
results = model.evaluate(test_gen, steps=len(df_test) / BATCH_SIZE)
print("Test lost: ", results[0])
print("Test IOU: ", results[1])
print("Test Dice Coefficent: ", results[2])

I get this error:

TypeError                                 Traceback (most recent call last)
<ipython-input-23-9684583ed357> in <module>()
      2                            dict(),
      3                            target_size=(im_height, im_width))
----> 4 results = model.evaluate(test_gen, steps=len(df_test) / BATCH_SIZE)
      5 print("Test lost: ",results[0])
      6 print("Test IOU: ",results[1])

9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    975       except Exception as e:  # pylint:disable=broad-except
    976         if hasattr(e, "ag_error_metadata"):
--> 977           raise e.ag_error_metadata.to_exception(e)
    978         else:
    979           raise

TypeError: in user code:

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1233 test_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1224 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:1219 run_step  **
        with ops.control_dependencies(_minimum_control_deps(outputs)):
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:2793 _minimum_control_deps
        outputs = nest.flatten(outputs, expand_composites=True)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:341 flatten
        return _pywrap_utils.Flatten(structure, expand_composites)

    TypeError: '<' not supported between instances of 'function' and 'str'

However, model.fit runs without errors before model.evaluate. The training code is:

EPOCHS = 150
BATCH_SIZE = 32
learning_rate = 1e-4

train_generator_args = dict(rotation_range=0.2,
                            width_shift_range=0.05,
                            height_shift_range=0.05,
                            shear_range=0.05,
                            zoom_range=0.05,
                            horizontal_flip=True,
                            fill_mode='nearest')
train_gen = train_generator(df_train, BATCH_SIZE,
                            train_generator_args,
                            target_size=(im_height, im_width))
test_gener = train_generator(df_val, BATCH_SIZE,
                             dict(),
                             target_size=(im_height, im_width))

model = unet(input_size=(im_height, im_width, 3))
decay_rate = learning_rate / EPOCHS
opt = Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=decay_rate, amsgrad=False)
model.compile(optimizer=opt, loss=dice_coef_loss, metrics=["binary_accuracy", iou, dice_coef])

callbacks = [ModelCheckpoint('unet_brain_mri_seg.hdf5', verbose=1, save_best_only=True)]

history = model.fit(train_gen,
                    steps_per_epoch=len(df_train) / BATCH_SIZE,
                    epochs=EPOCHS,
                    callbacks=callbacks,
                    validation_data=test_gener,
                    validation_steps=len(df_val) / BATCH_SIZE)
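A hedged workaround worth trying (not a confirmed fix): bypass the compile state stored in the HDF5 file with compile=False and recompile with the same loss and metrics used for training before calling evaluate. All names (dice_coef_loss, iou, dice_coef, train_generator, df_test, BATCH_SIZE, im_height, im_width) come from the question's own code.

from tensorflow.keras.models import load_model
from tensorflow.keras.optimizers import Adam

model = load_model('unet_brain_mri_seg.hdf5', compile=False,
                   custom_objects={'dice_coef_loss': dice_coef_loss,
                                   'iou': iou,
                                   'dice_coef': dice_coef})
model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_loss,
              metrics=["binary_accuracy", iou, dice_coef])

test_gen = train_generator(df_test, BATCH_SIZE, dict(),
                           target_size=(im_height, im_width))
results = model.evaluate(test_gen, steps=len(df_test) / BATCH_SIZE)
print("Test loss:", results[0])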

  • 3 answers · 18 views

if isinstance(m, nn.BatchNorm2d):
    m.weight.data.fill_(1)
    m.bias.data.zero_()

BatchNorm only performs normalization, so why do its parameters need to be initialized?
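A small PyTorch illustration of why the init code touches parameters at all: with the default affine=True, BatchNorm2d carries a learnable scale (weight, i.e. gamma) and shift (bias, i.e. beta) applied after the normalization, and the snippet above simply resets them to the neutral values 1 and 0.

import torch.nn as nn

bn = nn.BatchNorm2d(16)
print(bn.weight.shape, bn.bias.shape)  # torch.Size([16]) torch.Size([16]): per-channel gamma/beta
print(bn.weight.requires_grad)         # True: they are trained like any other parameter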

  • 5 answers · 34 views

At the epoch stage I get NameError: name 'epoch' is not defined. Running pip install epoch did not help either.
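A hedged reading of the error: epoch is normally just a loop variable in the training script, not an installable package, so pip install epoch cannot define it. It only exists inside (and after) a loop such as:

num_epochs = 10
for epoch in range(num_epochs):
    # ... training code that uses `epoch` goes here ...
    print("epoch", epoch)

# Referencing `epoch` before such a loop has run raises
# NameError: name 'epoch' is not defined.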

  • 1 answer · 35 views

I'm working on something like a binary classification algorithm. Through learning I obtain an indicator, call it A, and my decision rule is: A > T means class 1, A < T means class 2. The classification accuracy is too low, though. Working through the math, I found that A can be factored as A = B * C (derivation omitted), i.e. A is the product of two other indicators. Experiments show that B and C each work as binary-classification indicators on their own (B > T1 means class 1, B < T1 means class 2; likewise for C), and B alone classifies much better than either A or C. I then replaced the product with a linear combination A1 = k1*B + k2*C (k1 and k2 are tunable parameters), and this new indicator A1 classifies much better than the original A. How should this be explained (why is A1 better than A), and what theory applies? Any help appreciated.
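One possible way to frame it (a sketch, not a definitive explanation, and assuming B and C are positive so logarithms are defined): thresholding the product is a linear rule with fixed, equal weights in log space,

    A > T  ⟺  log B + log C > log T,

whereas A1 = k1*B + k2*C is a linear rule in (B, C) whose weights are tuned to the data. If B is a much stronger discriminator than C, as the experiments suggest, the tuned combination can down-weight C, while the product forces both factors to contribute equally; the extra degrees of freedom in (k1, k2) plausibly account for the accuracy gain.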

  • 3 answers · 33 views

if teacher_forcing_ratio == None:
    teacher_forcing_ratio = self.teacher_forcing_ratio
is_teacher = random.random() < teacher_forcing_ratio
output = Variable(trg.data[0,] if is_teacher else outputs[0,]).cuda()

The last line uses this [x,] structure twice. I'm new to both Python and machine learning, and I seem to have seen this usage with ordinary Python arrays and lists as well. Note: the type of trg.data appears to be torch.Tensor (I don't really understand tensors yet).
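A small illustration of what the trailing comma does: for a NumPy array or a PyTorch tensor, x[0,] is a tuple index (0,), which indexes only the first dimension, exactly like x[0].

import torch

x = torch.arange(12).reshape(3, 4)
print(x[0])        # tensor([0, 1, 2, 3])
print(x[0,])       # same result: the trailing comma just makes the index a 1-tuple
print(x[0].shape)  # torch.Size([4]) -- one dimension fewer than x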

  • 5 answers · 38 views

To make the code more efficient and concise (so that I don't have to re-read the image with imread for every different image-processing operation), I wrote the function shown in the attached image (screenshot missing), but it has a problem and I don't know how to fix it.

  • 3 answers · 28 views

Versions: TensorFlow 2.3.0, CUDA 10.1, cuDNN 7.6.5. (Screenshots omitted: NVIDIA driver version, training speed, CPU/GPU utilization during training, and other properties.) I compared the training speed of the CPU build and the GPU build and they are almost identical. During training the GPU is not used at all; the load falls on the CPU instead. Am I overthinking this, or is something wrong with my environment setup?
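A quick first check (standard TensorFlow 2.x API, nothing specific to this setup): if the GPU list prints empty or is_built_with_cuda() is False, the installed package is a CPU-only build or the CUDA/cuDNN installation is not being picked up, which would explain the identical training speeds.

import tensorflow as tf

print(tf.__version__)
print(tf.test.is_built_with_cuda())            # True only for a GPU-enabled build
print(tf.config.list_physical_devices('GPU'))  # should list the NVIDIA GPU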

  • 5 answers · 26 views

In collaborative filtering, what is the idea behind a hybrid of item-based and user-based approaches? How would this algorithm be implemented on a movie dataset?
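A hedged sketch of the usual hybrid idea (the tiny user-by-movie rating matrix and the blending weight alpha are made-up illustrations, not a real movie dataset): compute a user-based prediction and an item-based prediction from cosine similarities, then blend them linearly; on MovieLens-style data the matrix would simply be much larger.

import numpy as np

def cosine_sim(M):
    norm = np.linalg.norm(M, axis=1, keepdims=True) + 1e-9
    U = M / norm
    return U @ U.T

# Rows = users, columns = movies, 0 = unrated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4]], dtype=float)

user_sim = cosine_sim(R)      # user-user similarity
item_sim = cosine_sim(R.T)    # item-item similarity

# Similarity-weighted averages of the known ratings.
user_pred = user_sim @ R / (np.abs(user_sim).sum(axis=1, keepdims=True) + 1e-9)
item_pred = R @ item_sim / (np.abs(item_sim).sum(axis=0, keepdims=True) + 1e-9)

alpha = 0.5                   # blending weight, tuned on held-out ratings
hybrid_pred = alpha * user_pred + (1 - alpha) * item_pred
print(np.round(hybrid_pred, 2))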

  • 3 answers · 26 views

I'm training a model with yolov3. Both coco_classes.txt and voc_classes.txt were edited before training, yet this error appeared (screenshot missing). Training has already finished, so I don't know which step went wrong.

  • 3 answers · 25 views

I'm learning pixel segmentation and wanted to run a GitHub project, but the very first line fails to run. How do I fix this? Project: https://github.com/abhismatrix1/segmentation

  • 3 answers · 19 views

After installing Jupyter on Ubuntu, it shows both Python 2 and Python 3 kernels, but even after selecting the Python 2 kernel it still only runs Python 3.

  • 3 answers · 47 views

import os
import random
import numpy as np
from skimage import io
from PIL import Image

root_dir = 'D:/ISIC 2018/'  # change it in your saved original data path
save_dir = 'D:/CA/data/ISIC2018_Task1_npy_all/'

if __name__ == '__main__':
    imgfile = os.path.join(root_dir, 'ISIC2018_Task1-2_Training_Input')
    labfile = os.path.join(root_dir, 'ISIC2018_Task1_Training_GroundTruth')
    filename = sorted([os.path.join(imgfile, x) for x in os.listdir(imgfile) if x.endswith('.jpg')])
    random.shuffle(filename)
    labname = [filename[x].replace('ISIC2018_Task1-2_Training_Input',
                                   'ISIC2018_Task1_Training_GroundTruth'
                                   ).replace('.jpg', '_segmentation.png')
               for x in range(len(filename))]
    if not os.path.isdir(save_dir):
        os.makedirs(save_dir + '/image')
        os.makedirs(save_dir + '/label')
    for i in range(len(filename)):
        fname = filename[i].rsplit('/', maxsplit=1)[-1].split('.')[0]
        lname = labname[i].rsplit('/', maxsplit=1)[-1].split('.')[0]
        image = Image.open(filename[i])
        label = Image.open(labname[i])
        image = image.resize((342, 256))
        label = label.resize((342, 256))
        image = np.array(image)
        label = np.array(label)
        images_img_filename = os.path.join(save_dir, 'image', fname)
        labels_img_filename = os.path.join(save_dir, 'label', lname)
        np.save(images_img_filename, image)
        np.save(labels_img_filename, label)
    print('Successfully saved preprocessed data')

This uses Python 3.8 and the latest PyTorch downloaded from the official site. The only change to the paper author's code is the file paths; nothing else was modified.

  • 3 answers · 19 views

While following a textbook step by step and building a convolutional neural network with nn.Linear, I get the error shown in the attached image (screenshot missing). Adding print(x) and print(inputs) before the failing line prints normally, but the error still occurs. Environment: Anaconda, Spyder, Python 3.8.