lbt162020 2022-02-18 13:11 · acceptance rate: 68.8%
907 views
Closed

How do I handle the input and hidden tensors not being on the same device? (Input and hidden tensors are not at the same device)

RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
I only have a single graphics card, so this is not a multi-GPU setup.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTM_Attention(nn.Module):
    def __init__(self, vocab_size, embed_size, num_hiddens,
                 num_layers, **kwargs):
        super(BiLSTM_Attention, self).__init__(**kwargs)
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, num_hiddens, bidirectional=True)
        self.out = nn.Linear(num_hiddens * 2, 2)

    # lstm_output : [batch_size, n_step, n_hidden * num_directions(=2)], F matrix
    def attention_net(self, lstm_output, final_state):
        hidden = final_state.view(-1, num_hiddens * 2,
                                  1)  # hidden : [batch_size, n_hidden * num_directions(=2), 1(=n_layer)]
        attn_weights = torch.bmm(lstm_output, hidden).squeeze(2)  # attn_weights : [batch_size, n_step]
        soft_attn_weights = F.softmax(attn_weights, 1)
        # [batch_size, n_hidden * num_directions(=2), n_step] * [batch_size, n_step, 1] = [batch_size, n_hidden * num_directions(=2), 1]
        context = torch.bmm(lstm_output.transpose(1, 2), soft_attn_weights.unsqueeze(2)).squeeze(2)
        return context, soft_attn_weights.data.numpy()  # context : [batch_size, n_hidden * num_directions(=2)]

    def forward(self, X):
        input = self.embedding(X)  # input : [batch_size, len_seq, embedding_dim]
        input = input.permute(1, 0, 2)  # input : [len_seq, batch_size, embedding_dim]
        hidden_state = torch.zeros(1 * 2, len(X),
                                   num_hiddens)  # [num_layers(=1) * num_directions(=2), batch_size, n_hidden]
        cell_state = torch.zeros(1 * 2, len(X),
                                 num_hiddens)  # [num_layers(=1) * num_directions(=2), batch_size, n_hidden]

        # final_hidden_state, final_cell_state : [num_layers(=1) * num_directions(=2), batch_size, n_hidden]
        output, (final_hidden_state, final_cell_state) = self.lstm(input, (hidden_state, cell_state))
        output = output.permute(1, 0, 2)  # output : [batch_size, len_seq, n_hidden * num_directions(=2)]
        attn_output, attention = self.attention_net(output, final_hidden_state)
        return self.out(attn_output), attention  # model : [batch_size, num_classes], attention : [batch_size, n_step]
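
The mismatch comes from the two torch.zeros(...) calls in forward(): without a device argument they create hidden_state and cell_state on the CPU, while self.embedding(X) lives on cuda:0 once the model and X have been moved to the GPU. For reference, a minimal illustration of the same mismatch (assuming a CUDA device is available; the tensor names are only for illustration):

import torch

x = torch.randn(3, 4, device="cuda:0")  # stands in for the embedded input on the GPU
h = torch.zeros(3, 4)                   # no device given, so it is allocated on the CPU
print(x.device, h.device)               # prints "cuda:0 cpu" -- the mismatch in the error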



2 answers

  • ty94666 2022-02-18 13:34

    def init_hidden(self):
        return torch.randn(2, self.batch, self.hidden_dim // 2).to(self.device)

    def init_hidden_lstm(self):
        return (torch.randn(2, self.batch, self.hidden_dim // 2).to(self.device),
                torch.randn(2, self.batch, self.hidden_dim // 2).to(self.device))
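
    In other words, create the initial (h0, c0) on the same device as the input before handing them to the LSTM. Applied to the forward() from the question, a minimal sketch (assuming the model and X have already been moved to the GPU; using X.device keeps the same code working on the CPU as well):

    def forward(self, X):
        input = self.embedding(X)       # [batch_size, len_seq, embedding_dim]
        input = input.permute(1, 0, 2)  # [len_seq, batch_size, embedding_dim]
        # allocate the initial states directly on the input's device
        hidden_state = torch.zeros(1 * 2, len(X), num_hiddens, device=X.device)
        cell_state = torch.zeros(1 * 2, len(X), num_hiddens, device=X.device)
        output, (final_hidden_state, final_cell_state) = self.lstm(input, (hidden_state, cell_state))
        output = output.permute(1, 0, 2)
        attn_output, attention = self.attention_net(output, final_hidden_state)
        return self.out(attn_output), attention

    Alternatively, calling self.lstm(input) without an initial state lets PyTorch default to zero states on the input's device. Note that once everything runs on the GPU, soft_attn_weights.data.numpy() inside attention_net will also fail; soft_attn_weights.detach().cpu().numpy() avoids that.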

    Accepted by the asker as the best answer.


Question events

  • Closed by the system on Feb 26
  • Answer accepted on Feb 18
  • Question edited on Feb 18
  • Question edited on Feb 18
