In PyTorch, after writing the statement `class ConvNet(nn.Module):` I get the error `NameError: name 'ConvNet' is not defined`. What is going on?

```
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.optim as optim
import torch.nn.functional as F

import torchvision.datasets as dsets
import torchvision.transforms as transforms

import matplotlib.pyplot as plt
import numpy as np
#%matplotlib inline
image_size=28
num_classes=10
num_epochs=20
batch_size=64

train_dataset=dsets.MNIST(root='./data',
                          train=True,
                          transform=transforms.ToTensor(),
                          download=True)
test_dataset=dsets.MNIST(root='./data',
                         train=False,
                         transform=transforms.ToTensor())
train_loader=torch.utils.data.DataLoader(dataset=train_dataset,
                                         batch_size=batch_size,
                                         shuffle=True)
indices=range(len(test_dataset))
indices_val=indices[:5000]
indices_test=indices[5000:]

sampler_val=torch.utils.data.sampler.SubsetRandomSampler(indices_val)
sampler_test=torch.utils.data.sampler.SubsetRandomSampler(indices_test)
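
# the 10,000 MNIST test images are split in two: the first 5,000 indices will feed
# a validation loader, the remaining 5,000 the held-out test loader defined next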
validation_loader=torch.utils.data.DataLoader(dataset=test_dataset,
                                              batch_size=batch_size,
                                              shuffle=False,
                                              sampler=sampler_val)
test_loader=torch.utils.data.DataLoader(dataset=test_dataset,
                                        batch_size=batch_size,
                                        shuffle=False,
                                        sampler=sampler_test)
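
# quick sanity check: display one training image together with its label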
idx=110  # an arbitrarily chosen sample index
muteimg=train_dataset[idx][0].numpy()
plt.imshow(muteimg[0,...])
plt.show()
print('Label:',train_dataset[idx][1])

depth=[4,8]

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet,self).__init__()
        self.conv1=nn.Conv2d(1,4,5,padding=2)
        self.pool=nn.MaxPool2d(2,2)
        self.conv2=nn.Conv2d(depth[0],depth[1],5,padding=2)
        # two rounds of 2x2 pooling shrink the 28x28 image to 7x7, with depth[1] channels
        self.fc1=nn.Linear(image_size//4*image_size//4*depth[1],512)
        self.fc2=nn.Linear(512,num_classes)

    def forward(self, x):
        x=self.conv1(x)
        x=F.relu(x)
        x=self.pool(x)
        x=self.conv2(x)
        x=F.relu(x)
        x=self.pool(x)
        x=x.view(-1,image_size//4*image_size//4*depth[1])
        x=F.relu(self.fc1(x))
        x=F.dropout(x,training=self.training)
        x=self.fc2(x)
        x=F.log_softmax(x,dim=1)
        return x

    def retrieve_features(self,x):
        feature_map1=F.relu(self.conv1(x))
        x=self.pool(feature_map1)
        feature_map2=F.relu(self.conv2(x))
        return (feature_map1,feature_map2)

net=ConvNet()
criterion=nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(),lr=0.001,momentum=0.9)
record=[]
weights=[]
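
# NOTE: `rightness` is called in the loops below but never defined anywhere above.
# A minimal sketch of what it presumably computes, judging from how its results are
# summed afterwards: (number of correct predictions in the batch, batch size).
def rightness(predictions,labels):
    pred=torch.max(predictions.data,1)[1]               # index of the largest log-probability
    rights=pred.eq(labels.data.view_as(pred)).sum()     # how many predictions match the labels
    return rights,len(labels)
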
for epoch in range(num_epochs):
    train_rights=[]
    for batch_idx,(data,target) in enumerate (train_loader):
        data,target=Variable(data),Variable(target)
        net.train()
        output=net(data)
        loss=criterion(output,target)
        optimizer.zero_grad()
        loss.backward()   # the backward pass was missing, so step() never updated the weights
        optimizer.step()
        right=rightness(output,target)
        train_rights.append(right)

        if batch_idx % 100 ==0:
            net.eval()
            val_rights=[]
            for(data,target) in validation_loader:
                data,target=Variable(data),Variable(target)
                output=net(data)
                right=rightness(output,target)
                val_rights.append(right)
            train_r=(sum([tup[0] for tup in train_rights]),sum([tup[1] for tup in train_rights]))
            val_r=(sum([tup[0] for tup in val_rights]),sum([tup[1] for tup in val_rights]))

            record.append((100-100.*train_r[0]/train_r[1],100-100.*val_r[0]/val_r[1]))
            weights.append([net.conv1.weight.data.clone(),net.conv1.bias.data.clone(),
                            net.conv2.weight.data.clone(),net.conv2.bias.data.clone()])

net.eval()
vals=[]

for data,target in test_loader:
    data,target=Variable(data,volatile=True),Variable(target)
    output = net(data)
    val=rightness(output,target)
    vals.append(val)

rights=(sum([tup[0] for tup in vals]),sum([tup[1] for tup in vals]))
rights_rate=1.0*rights[0]/rights[1]
rights_rate  # overall accuracy on the held-out test samples

plt.figure(figsize=(10,7))
plt.plot(record)
plt.xlabel('Steps')
plt.ylabel('Error rate')
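plt.show()  # not needed with %matplotlib inline, but required when run as a plain script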

```

1 Answer

Check whether the error is actually being raised somewhere else — in particular whether a package, variable, or object with the same name is involved; it's hard to say more without seeing the code in context.
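
To make that concrete for the snippet above (a guess, since the full traceback isn't shown): a `NameError` on `ConvNet` at `net=ConvNet()` means the `class ConvNet(nn.Module):` statement never ran successfully in the current session. In a notebook this usually happens because the cell containing the class raised an error first — most often an `IndentationError`, which is easy to trigger when pasting strips the leading spaces from the method bodies — and the later cell that instantiates the model was run anyway; it also happens when the defining cell simply wasn't re-run after a kernel restart. The sketch below reuses the imports and the `image_size`/`num_classes` variables from the question but swaps in a stripped-down stand-in model; it only illustrates the pattern: fix and re-run the class definition first, then instantiate it.

```
# Stand-in model, not the full ConvNet from the question: the point is only that
# the `class` block has to execute without errors before the name exists at all.
class ConvNet(nn.Module):
    def __init__(self):                  # must be __init__ with double underscores;
        super(ConvNet,self).__init__()   # a plain `def init` would silently skip layer creation
        self.fc=nn.Linear(image_size*image_size,num_classes)

    def forward(self,x):
        return F.log_softmax(self.fc(x.view(x.size(0),-1)),dim=1)

net=ConvNet()   # works once the class block above has executed without errors
print(net)
```

If the class block does run cleanly and the `NameError` still appears, the answer above applies: look for another variable, file, or module that reuses the name, and check that the definition really executes before the first `ConvNet()` call.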
