I've always used off-the-shelf frameworks, and with some free time these past few days I decided on a whim to write a simple neural network myself. For some reason, by the second iteration both w1.grad and w2.grad are None after backward.
Could anyone explain what's going on?
import torch
import numpy as np
N, D_in, H, D_out = 64, 1000, 100, 10
learning_rate = 1e-6
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H)
w2 = torch.randn(H, D_out)
for it in range(500):
    w1 = w1.requires_grad_()
    w2 = w2.requires_grad_()
    # Forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    # compute loss
    loss = (y_pred - y).pow(2).sum()  # computation graph
    print(it, loss.item())
    # Backward pass
    loss.backward()
    w1 = w1 - learning_rate * w1.grad
    w2 = w2 - learning_rate * w2.grad
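For comparison, here is a sketch of the usual fix, following the standard PyTorch tutorial pattern. The reassignment `w1 = w1 - learning_rate * w1.grad` replaces `w1` with the *result of an autograd op*, i.e. a non-leaf tensor, and `.grad` is only populated on leaf tensors, which would explain the `None` you are seeing. The fix is to create the weights as leaf tensors once before the loop, update them in place under `torch.no_grad()`, and zero the gradients each iteration:

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10
learning_rate = 1e-6

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Create the weights as leaf tensors ONCE, outside the loop,
# so .grad accumulates on these same objects every iteration.
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

losses = []
for it in range(500):
    # Forward pass
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    losses.append(loss.item())

    # Backward pass populates w1.grad and w2.grad
    loss.backward()

    # Update in place under no_grad so the update itself is not
    # recorded in the graph and w1/w2 remain leaf tensors.
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        # Zero the gradients, otherwise they accumulate across iterations.
        w1.grad.zero_()
        w2.grad.zero_()

print(losses[0], losses[-1])
```

With this version `w1` and `w2` stay leaf tensors for all 500 iterations, so their `.grad` is never None, and the loss decreases steadily.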