qq_53216250 2024-03-22 16:18

Deep learning Python code question

A question about an error in my CNN code.
The error message is as follows:

/root/autodl-tmp/project/twst2.py:108: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  train_data = torch.tensor(train_data, device=device, dtype=torch.float32)
Traceback (most recent call last):
  File "/root/autodl-tmp/project/twst2.py", line 279, in <module>
    loss, r2, MAE = train(model, optimizer, criterion, batch_data, batch_label)
  File "/root/autodl-tmp/project/twst2.py", line 121, in train
    SSR = ((output - train_label) ** 2).sum().clone().detach().numpy()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/tensor.py", line 630, in __array__
    return self.numpy()
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.

The code is as follows:

def train(model, optimizer, criterion, train_data, train_label):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    train_data = torch.tensor(train_data).to(device)
    train_label = train_label.astype(np.float32)
    train_label = torch.tensor(train_label).to(device)
    train_label,train_data = train_label.to(device), train_data.to(device)
    model.train()
    optimizer.zero_grad()
    output = model(train_data)
    output = output.squeeze()
    train_label = train_label.cpu().numpy().astype(float)
    train_label_tensor = torch.tensor(train_label, dtype=torch.float32).to(device)
    #train_label = train_label.float()xiugai
    mean_train_label = torch.mean(train_label_tensor)
    SST = ((train_label_tensor - mean_train_label) ** 2).sum()
    SSR = ((output - train_label) ** 2).sum().clone().detach().numpy()
    SSR = SSR.detach().numpy()
    r2 = 1 - SSR / SST
    MAE = torch.mean(torch.abs(train_label - output))
    loss = criterion(output, train_label)
   # loss = sum(loss_num)/len(loss_num)
    loss.backward()
    optimizer.step()
    loss_num = torch.sqrt(loss).detach().numpy
    # r2 = torch.tensor(r2)
    return loss_num, r2, MAE

2 replies

  • CSDN-Ada助手 CSDN-AI official account 2024-03-22 18:24

    [The following reply was generated by GPT]

    The RuntimeError is raised because train_label has already been converted to a NumPy array (train_label = train_label.cpu().numpy().astype(float)), so evaluating output - train_label makes NumPy try to convert output, which still requires grad and lives on the GPU. Detaching at the end of the chain happens too late. Compute the residual against the tensor version of the label and detach/move to CPU before calling .numpy(), i.e. change:
    SSR = ((output - train_label) ** 2).sum().clone().detach().numpy()
    to:
    SSR = ((output - train_label_tensor) ** 2).sum().detach().cpu().numpy()
    The same applies to MAE and to the criterion call: pass train_label_tensor rather than the NumPy array. The UserWarning on line 108 comes from wrapping an existing tensor in torch.tensor(); use sourceTensor.clone().detach() (or torch.as_tensor) as the warning suggests.
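    For reference, here is a minimal sketch of the whole train function with the metrics kept as tensors and only converted to plain Python numbers at the end. It assumes torch is imported as in your script, that train_data and train_label arrive as NumPy arrays or tensors, and that model, optimizer and criterion are set up as in the question; adjust as needed:

    def train(model, optimizer, criterion, train_data, train_label):
        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        # torch.as_tensor also avoids the copy-construct UserWarning
        train_data = torch.as_tensor(train_data, dtype=torch.float32, device=device)
        train_label = torch.as_tensor(train_label, dtype=torch.float32, device=device)

        model.train()
        optimizer.zero_grad()
        output = model(train_data).squeeze()

        loss = criterion(output, train_label)
        loss.backward()
        optimizer.step()

        # the metrics do not need gradients: compute them under no_grad
        # and convert to Python numbers with .item()
        with torch.no_grad():
            SST = ((train_label - train_label.mean()) ** 2).sum()
            SSR = ((output - train_label) ** 2).sum()
            r2 = (1.0 - SSR / SST).item()
            MAE = torch.abs(output - train_label).mean().item()
            loss_num = torch.sqrt(loss).item()  # RMSE if criterion is MSELoss

        return loss_num, r2, MAE

    With this version, the call at line 279, loss, r2, MAE = train(model, optimizer, criterion, batch_data, batch_label), does not need to change.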
    


    If you have already solved this problem, please consider writing up the solution as a blog post and leaving the link in the comments to help more people ^-^
