qq_53216250 2024-03-21 16:08 · acceptance rate: 0%

CNN runtime error — help requested

The code raises the following error:

Traceback (most recent call last):
  File "/root/autodl-tmp/project/twst2.py", line 274, in <module>
    loss, r2, MAE = train(model, optimizer, criterion, batch_data, batch_label)
  File "/root/autodl-tmp/project/twst2.py", line 117, in train
    SSR = ((output - train_label) ** 2).sum().cpu().detach().numpy()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/tensor.py", line 630, in __array__
    return self.numpy()
RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
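For context, this error means `numpy()` was called, directly or implicitly via NumPy's array conversion, on a tensor that is still attached to the autograd graph. A minimal sketch that reproduces the message, assuming only `torch` is installed:

```python
import torch

# a tensor that requires grad cannot be converted to NumPy directly
t = torch.ones(3, requires_grad=True)

try:
    t.numpy()  # raises the same RuntimeError as in the traceback above
except RuntimeError as e:
    print(type(e).__name__)  # RuntimeError

# detach() first: the detached view shares data but has no grad history
arr = t.detach().numpy()
print(arr.shape)
```

In the question's code the conversion is implicit: subtracting a NumPy array from a grad-requiring tensor can make NumPy try to convert the tensor to an array, which triggers the same check.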

The relevant code is as follows:

def train(model, optimizer, criterion, train_data, train_label):
    # train_label,train_data = train_label.to(device), train_data.to(device)
    model.train()
    optimizer.zero_grad()
    output = model(train_data)
    output = output.squeeze()
    train_label = train_label.astype(float)
    train_label_tensor = torch.tensor(train_label, dtype=torch.float32).to(device)
    # train_label = train_label.float()  # earlier modification attempt
    mean_train_label = torch.mean(train_label_tensor)
    SST = ((train_label_tensor - mean_train_label) ** 2).sum()
   # SST = ((train_label - torch.mean(train_label)) ** 2).sum()

    SSR = ((output - train_label) ** 2).sum().cpu().detach().numpy()
    #SSR = SSR.detach().cpu().numpy()
    r2 = 1 - SSR / SST
    MAE = torch.mean(torch.abs(train_label - output))
    loss = criterion(output, train_label)
    # loss = sum(loss_num)/len(loss_num)
    loss.backward()
    optimizer.step()
    loss_num = torch.sqrt(loss)
    # r2 = torch.tensor(r2)
    return loss_num, r2, MAE
        batch_size = 256
        num_batches = len(train_data) // batch_size
        # test_num_batches = len(test_data) // batch_size
        for epoch in range(num_epochs):
            train_loss_sum = np.zeros(1)
            tr_mae_sum = np.zeros(1)
            tr_r2_sum = np.zeros(1)
            for i in range(num_batches):
                start_idx = i * batch_size
                end_idx = (i + 1) * batch_size
                batch_data = train_data[start_idx:end_idx].unsqueeze(1)
                batch_label = train_label[start_idx:end_idx]
                # train_loss.append(train(model, optimizer, criterion, batch_data, batch_label))
                loss, r2, MAE = train(model, optimizer, criterion, batch_data, batch_label)
                train_loss = loss.cpu().detach().numpy()
                train_r2 = r2.cpu().detach().numpy()
                train_MAE = MAE.cpu().detach().numpy()
                # train_loss.append(train_loss)
                train_loss_sum = np.array(train_loss_sum) + train_loss
                tr_r2_sum = np.array(tr_r2_sum) + train_r2
                tr_mae_sum = np.array(tr_mae_sum) + train_MAE
            train_r2_mean = tr_r2_sum / num_batches
            train_loss_mean = train_loss_sum / num_batches
            train_mae_mean = tr_mae_sum / num_batches
            with torch.no_grad():
                val_outputs = model(vali_data.unsqueeze(1))
                val_output = val_outputs.squeeze()
                vali_label = vali_label.float()
                val_loss = criterion(val_output, vali_label)
                val_loss = torch.sqrt(val_loss)
                SST = ((vali_label - torch.mean(vali_label)) ** 2).sum()
                SSR = ((val_output - vali_label) ** 2).sum()
                val_r2 = 1 - SSR / SST
                val_mae = torch.mean(torch.abs(vali_label - val_output))
            print(f'Epoch [{epoch + 1}/{num_epochs}], Train Loss: {train_loss_mean.item()}, Train r2: {train_r2_mean.item()}, Train mae: {train_mae_mean.item()}, Val Loss: {val_loss.item()}, Val r2: {val_r2.item()}, Val mae: {val_mae.item()}')
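A side note on the accumulation in the loop above: since each per-batch metric is a 0-d tensor, calling `.item()` converts it straight to a Python float (detaching implicitly), which removes the need for the `np.zeros(1)` bookkeeping arrays entirely. A minimal sketch with made-up per-batch values:

```python
import torch

# stand-ins for the per-batch loss tensors returned by train() (assumed values)
batch_losses = [torch.tensor(2.0), torch.tensor(4.0), torch.tensor(6.0)]

loss_sum = 0.0
for loss in batch_losses:
    loss_sum += loss.item()  # .item() detaches and returns a Python float

loss_mean = loss_sum / len(batch_losses)
print(loss_mean)  # 4.0
```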


2 answers

  • 码农阿豪 (quality creator in the Java field) 2024-03-21 16:08
    Let Ahao help you with this. This answer was written with reference to ChatGPT 3.5; if you still have questions, feel free to comment or leave a message.
    Question: why does the code raise this error?
    Cause: the message "Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead." means numpy() cannot be called on a tensor that still requires grad. Here the call happens implicitly: train_label is a NumPy array, so the expression (output - train_label) makes NumPy try to convert output, which requires grad, into an array.
    Fix: compute SSR, MAE, and the loss against train_label_tensor instead of the raw NumPy array, and call detach() before numpy() on any tensor still attached to the graph. A corrected train() function:

    def train(model, optimizer, criterion, train_data, train_label):
        model.train()
        optimizer.zero_grad()
        output = model(train_data)
        output = output.squeeze()
        # train_label arrives as a NumPy array: convert it to a tensor first
        train_label = train_label.astype(float)
        train_label_tensor = torch.tensor(train_label, dtype=torch.float32).to(device)
        mean_train_label = torch.mean(train_label_tensor)
        SST = ((train_label_tensor - mean_train_label) ** 2).sum()
        # use the tensor, not the NumPy array, so autograd handles the subtraction
        SSR = ((output - train_label_tensor) ** 2).sum()
        r2 = 1 - SSR / SST
        MAE = torch.mean(torch.abs(train_label_tensor - output))
        loss = criterion(output, train_label_tensor)
        loss.backward()
        optimizer.step()
        loss_num = torch.sqrt(loss)
        # the caller detaches these tensors before calling numpy() on them
        return loss_num, r2, MAE
    

    The corrected code computes SSR, MAE, and the loss against train_label_tensor rather than the raw NumPy array, and the returned tensors are passed through detach() (as the outer training loop already does) before numpy() is called on them.
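To illustrate the detach() pattern the answer describes, a small self-contained sketch (assuming only `torch`):

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
y = (x ** 2).sum()  # y is attached to the autograd graph

# detach() returns a tensor sharing the same data but with requires_grad=False,
# so converting it to NumPy is now allowed
val = y.detach().cpu().numpy()
print(float(val))  # 9.0
```

Note that detach() does not copy the underlying data; it only cuts the autograd history, which is why it is cheap to call right before numpy().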


