An LSTM is used for classification; the loss function is cross-entropy, the optimizer is Adam with lr=0.01, and the inputs are standardized (zero mean, unit std). The loss curve keeps rising, yet the accuracy rises as well.
The data shape is [batch_size, 1, sequence_length] = [256, 1, 2000], which after preprocessing becomes [256, 1, 15].
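A minimal sketch of the standardization step mentioned above. The original post does not say which axis the statistics are computed over; here I assume per-feature statistics over the batch dimension, which is one common choice:

```python
import torch

# Dummy data in the described raw shape [batch, 1, sequence_length].
x = torch.randn(256, 1, 2000) * 5 + 3

# Assumed: standardize each position using batch-wise mean/std
# (zero mean, unit std per feature); the actual axis in the original
# pipeline is not stated.
mean = x.mean(dim=0, keepdim=True)
std = x.std(dim=0, keepdim=True)
x_norm = (x - mean) / (std + 1e-8)  # epsilon guards against division by zero
```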
The nn.LSTM parameters are input_size=15, hidden_size=16, num_layers=2, batch_first=True, bidirectional=True.
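For context, a runnable sketch of a model with exactly these nn.LSTM arguments. The classifier head (`fc`) and `num_classes` are assumptions, since the post only gives the LSTM parameters; note that bidirectional=True doubles the output feature dimension to 2 * 16 = 32:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, num_classes=4):  # num_classes is a placeholder
        super().__init__()
        self.lstm = nn.LSTM(input_size=15, hidden_size=16, num_layers=2,
                            batch_first=True, bidirectional=True)
        # bidirectional=True -> feature dim is 2 * hidden_size = 32
        self.fc = nn.Linear(2 * 16, num_classes)

    def forward(self, x):              # x: [batch, seq_len=1, 15]
        out, _ = self.lstm(x)          # out: [batch, seq_len, 32]
        return self.fc(out[:, -1])     # logits of last step: [batch, num_classes]

model = LSTMClassifier()
logits = model(torch.randn(256, 1, 15))
print(logits.shape)  # torch.Size([256, 4])
```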
The training loop is:
loss_all = 0
total_train_acc = 0  # was initialized as train_acc = 0, leaving total_train_acc undefined
for step, (x, y) in enumerate(train_loader):  # gives batch data
    output = lstm(x)
    loss = loss_func(output, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    train_pred = torch.argmax(output, dim=1)
    train_acc = (train_pred == y).float().mean()  # per-batch accuracy
    total_train_acc = total_train_acc + train_acc
    loss_all = loss_all + loss.item()  # .item() so the autograd graph is not retained

total_train_loss = loss_all / len(train_loader)
total_train_acc = total_train_acc / len(train_loader)
total_loss.append(total_train_loss)
total_acc.append(total_train_acc.item())
The curves look like this:
Why does this happen? When computing the loss and accuracy, is it correct to use len(train_loader) as the divisor?
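On the divisor question, a small illustration (with made-up numbers) of when dividing a sum of per-batch means by len(train_loader) is and is not exact: it is correct only if every batch has the same size; with a smaller last batch (the default when drop_last=False), each batch mean should be weighted by its batch size:

```python
import torch

# Hypothetical per-batch accuracies and sizes: the last batch is smaller.
batch_accs = [torch.tensor(0.5), torch.tensor(1.0)]
batch_sizes = [256, 64]

# Unweighted average over len(train_loader) batches.
naive = sum(batch_accs) / len(batch_accs)
# Sample-weighted average: weight each batch mean by its batch size.
weighted = sum(a * n for a, n in zip(batch_accs, batch_sizes)) / sum(batch_sizes)

print(naive.item(), weighted.item())  # 0.75 vs 0.6
```

With equal batch sizes the two results coincide, so the division by len(train_loader) is fine in that case.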