weixin_43998834 · 2019-10-11 10:54 · acceptance rate: 0%
10954 views

Error when using PyTorch's DataLoader: RuntimeError: DataLoader worker (pid(s) 1004, 4680) exited unexpectedly

When I run the code below, Spyder keeps raising:
RuntimeError: DataLoader worker (pid(s) 1004, 4680) exited unexpectedly
Strangely, the same code runs fine in Jupyter Notebook.
How can I fix this?

import torch
import torch.utils.data as Data
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt

torch.manual_seed(1)  # fix the random seed for reproducibility

# Hyperparameters
LR = 0.01        # learning rate
BATCH_SIZE = 32  # batch size
EPOCH = 12       # number of epochs

x = torch.unsqueeze(torch.linspace(-1, 1, 1000), dim=1)
y = x.pow(2) + 0.1 * torch.normal(torch.zeros(*x.size()))

# plt.scatter(x.numpy(), y.numpy())
# plt.show()

# Wrap the tensors in a torch Dataset
torch_dataset = Data.TensorDataset(x, y)
# Put the dataset into a DataLoader
loader = Data.DataLoader(dataset=torch_dataset, batch_size=BATCH_SIZE,
                         shuffle=True, num_workers=2)

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(1, 20)
        self.predict = torch.nn.Linear(20, 1)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        x = self.predict(x)
        return x

# Create one Net per optimizer
net_SGD = Net()
net_Momentum = Net()
net_RMSprop = Net()
net_Adam = Net()
nets = [net_SGD, net_Momentum, net_RMSprop, net_Adam]

# Initialize the optimizers
opt_SGD = torch.optim.SGD(net_SGD.parameters(), lr=LR)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=LR, momentum=0.8)
opt_RMSprop = torch.optim.RMSprop(net_RMSprop.parameters(), lr=LR, alpha=0.9)
opt_Adam = torch.optim.Adam(net_Adam.parameters(), lr=LR, betas=(0.9, 0.99))
optimizers = [opt_SGD, opt_Momentum, opt_RMSprop, opt_Adam]

# Loss function
loss_function = torch.nn.MSELoss()
losses_history = [[], [], [], []]  # record each network's loss during training

for epoch in range(EPOCH):
    print('Epoch:', epoch + 1, 'Training...')
    for step, (batch_x, batch_y) in enumerate(loader):
        b_x = Variable(batch_x)
        b_y = Variable(batch_y)

        for net, opt, l_his in zip(nets, optimizers, losses_history):
            output = net(b_x)
            loss = loss_function(output, b_y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            l_his.append(loss.item())

labels = ['SGD', 'Momentum', 'RMSprop', 'Adam']
for i, l_his in enumerate(losses_history):
    plt.plot(l_his, label=labels[i])
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 0.2))
plt.show()
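(For context: the likely cause is that on Windows, Python multiprocessing uses the "spawn" start method, so each DataLoader worker re-imports the main script. Spyder runs the script in a way that triggers this re-import, while Jupyter's kernel setup masks it. Any script that spawns workers (num_workers > 0) at module level therefore needs an `if __name__ == "__main__":` guard. A minimal, framework-free sketch of the pattern, using only the standard library — the worker function and `Pool` here stand in for DataLoader's worker processes:)

```python
# On "spawn" platforms each worker process re-imports this file, so any
# process-creating code at module level would recurse. Creating workers
# only inside the __main__ guard (or a function called from it) is safe.
import multiprocessing as mp

def square(x):
    return x * x

def run():
    # Workers are created here, never during the module-level re-import.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(run())  # [1, 4, 9]
```

The same structure applies to the training script: move the DataLoader creation and the epoch loop into a function and call it under the guard.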

3 answers (newest first)

zqbnqsdsmd answered 2019-10-12 09:40
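(The two fixes usually reported for this error are: (a) wrap the training loop in an `if __name__ == "__main__":` guard as described above, or (b) set `num_workers=0` so that batches are loaded in the main process and no workers are spawned at all. A minimal sketch of fix (b), using the same TensorDataset/DataLoader pattern as the question — batch size 32 is carried over from the original script:)

```python
import torch
import torch.utils.data as Data

# Same synthetic data as in the question: 1000 points of y = x^2 + noise.
x = torch.unsqueeze(torch.linspace(-1, 1, 1000), dim=1)
y = x.pow(2) + 0.1 * torch.normal(torch.zeros(*x.size()))

dataset = Data.TensorDataset(x, y)
# num_workers=0 loads every batch in the main process, which sidesteps
# the worker-spawn problem entirely (at the cost of parallel loading).
loader = Data.DataLoader(dataset, batch_size=32, shuffle=True, num_workers=0)

for batch_x, batch_y in loader:
    pass  # each full batch has shape [32, 1]; the last batch is smaller
```

`num_workers=0` is the simpler change for a toy dataset like this one; the `__main__` guard is the right fix once data loading actually benefits from worker processes.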
