    Salute! A PyTorch Implementation of TextRNN

      Video walkthrough on Bilibili

      This post walks through how to implement TextRNN in PyTorch and use it to predict the next word of a sentence.

      The model follows the paper Finding Structure in Time (1990). If you already have a decent understanding of RNNs, you don't really need to read it; just look carefully at how the code below implements the model. If you are not familiar with RNNs, please read my post RNN Layer first, which explains them in detail together with PyTorch.

      The setup is this: I have n sentences, each consisting of exactly 3 words. The task is to take the first two words of each sentence as the input and the last word as the output, and train an RNN model on these pairs.
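      For example, the sentence "i like dog" yields the input ["i", "like"] and the target "dog".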

      Import the libraries

      '''
        code by Tae Hwan Jung(Jeff Jung) @graykode, modify by wmathor
      '''
      import torch
      import numpy as np
      import torch.nn as nn
      import torch.optim as optim
      import torch.utils.data as Data
      
      dtype = torch.FloatTensor
      

      Prepare the data

      sentences = [ "i like dog", "i love coffee", "i hate milk"]
      
      word_list = " ".join(sentences).split()
      vocab = list(set(word_list))
      word2idx = {w: i for i, w in enumerate(vocab)}
      idx2word = {i: w for i, w in enumerate(vocab)}
      n_class = len(vocab)
      

      Preprocess the data, build the Dataset, and define the DataLoader; the inputs are one-hot encoded

      # TextRNN Parameter
      batch_size = 2
      n_step = 2 # number of cells(= number of Step)
      n_hidden = 5 # number of hidden units in one cell
      
      def make_data(sentences):
          input_batch = []
          target_batch = []
      
          for sen in sentences:
              word = sen.split()
              input = [word2idx[n] for n in word[:-1]]
              target = word2idx[word[-1]]
      
              input_batch.append(np.eye(n_class)[input])
              target_batch.append(target)
      
          return input_batch, target_batch
      
      input_batch, target_batch = make_data(sentences)
      input_batch, target_batch = torch.Tensor(input_batch), torch.LongTensor(target_batch)
      dataset = Data.TensorDataset(input_batch, target_batch)
      loader = Data.DataLoader(dataset, batch_size, True)
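
      As a quick sanity check (my own addition, not in the original post), you can print the shapes of the tensors produced above. With the three example sentences there are 7 distinct words, so n_class is 7:

      print(input_batch.shape)   # torch.Size([3, 2, 7]) -> [num_sentences, n_step, n_class]
      print(target_batch.shape)  # torch.Size([3])       -> one class index per sentence
      print(input_batch[0])      # two one-hot rows encoding "i" and "like"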
      

      The code so far should be clear; next we define the network architecture

      class TextRNN(nn.Module):
          def __init__(self):
              super(TextRNN, self).__init__()
              self.rnn = nn.RNN(input_size=n_class, hidden_size=n_hidden)
              # fc
              self.fc = nn.Linear(n_hidden, n_class)
      
          def forward(self, hidden, X):
              # X: [batch_size, n_step, n_class]
              X = X.transpose(0, 1) # X : [n_step, batch_size, n_class]
              out, hidden = self.rnn(X, hidden)
              # out : [n_step, batch_size, num_directions(=1) * n_hidden]
              # hidden : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
              out = out[-1] # [batch_size, num_directions(=1) * n_hidden] ⭐
              model = self.fc(out)
              return model
      
      model = TextRNN()
      criterion = nn.CrossEntropyLoss()
      optimizer = optim.Adam(model.parameters(), lr=0.001)
      

      Every step of this code is worth a comment. First, the two arguments of nn.RNN(input_size, hidden_size): input_size is the dimensionality of each word's encoding. Since I use one-hot encoding rather than word embeddings, input_size equals the vocabulary size len(vocab), i.e. n_class. hidden_size has no fixed requirement; set it to whatever dimensionality you want the hidden representation to have.
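      To make these two arguments concrete, here is a small sketch (my own illustration, reusing n_class and n_hidden from above) that inspects the weight shapes of such an RNN:

      rnn = nn.RNN(input_size=n_class, hidden_size=n_hidden)
      # Input-to-hidden weight: maps a one-hot vector of length n_class (7 here)
      # to a hidden vector of length n_hidden (5 here).
      print(rnn.weight_ih_l0.shape)  # torch.Size([5, 7]) -> [n_hidden, n_class]
      # Hidden-to-hidden weight: square, n_hidden -> n_hidden.
      print(rnn.weight_hh_l0.shape)  # torch.Size([5, 5])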

      For most neural networks the first dimension of the input is batch_size, but by default PyTorch's nn.RNN() expects batch_size on the second dimension, so we use x.transpose(0, 1) to swap the first and second dimensions of the input.
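      As a side note (not part of the original code), nn.RNN also accepts batch_first=True, in which case the input can stay as [batch_size, n_step, n_class] and the transpose becomes unnecessary. A minimal sketch:

      rnn = nn.RNN(input_size=n_class, hidden_size=n_hidden, batch_first=True)
      x = torch.randn(batch_size, n_step, n_class)  # [batch_size, n_step, n_class]
      h0 = torch.zeros(1, batch_size, n_hidden)     # h0 layout is unchanged by batch_first
      out, hidden = rnn(x, h0)
      print(out.shape)  # torch.Size([2, 2, 5]) -> [batch_size, n_step, n_hidden]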

      Next, the RNN's outputs. rnn returns two results, out and hidden in the code above. I covered the difference between them in an earlier post; if it is unclear, see the RNN Layer post mentioned above. In short, out holds the output of the last layer at every time step (the red box in the figure), while hidden holds the final hidden state of every layer (the blue box). What we need is the output of the last layer at the last time step, i.e. the value Y_3, which is why we take out = out[-1].
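      This relationship is easy to verify (a small check I added): for a single-layer, unidirectional RNN, the last time step of out is exactly the final hidden state:

      rnn = nn.RNN(input_size=n_class, hidden_size=n_hidden)
      x = torch.randn(n_step, batch_size, n_class)  # [n_step, batch_size, n_class]
      h0 = torch.zeros(1, batch_size, n_hidden)
      out, hidden = rnn(x, h0)
      print(torch.allclose(out[-1], hidden[0]))  # True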

      The rest is straightforward: train and then test

      # Training
      for epoch in range(5000):
          for x, y in loader:
              # hidden : [num_layers * num_directions, batch, hidden_size]
              hidden = torch.zeros(1, x.shape[0], n_hidden)
              # x : [batch_size, n_step, n_class]
              pred = model(hidden, x)

              # pred : [batch_size, n_class], y : [batch_size] (LongTensor, not one-hot)
              loss = criterion(pred, y)
              if (epoch + 1) % 1000 == 0:
                  print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

              optimizer.zero_grad()
              loss.backward()
              optimizer.step()
          
      input = [sen.split()[:2] for sen in sentences]
      # Predict
      hidden = torch.zeros(1, len(input), n_hidden)
      predict = model(hidden, input_batch).data.max(1, keepdim=True)[1]
      print([sen.split()[:2] for sen in sentences], '->', [idx2word[n.item()] for n in predict.squeeze()])
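
      Since the dataset contains only three sentences, the model essentially memorizes them; once training converges, the final print statement should produce something like [['i', 'like'], ['i', 'love'], ['i', 'hate']] -> ['dog', 'coffee', 'milk'].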
      

      The complete code is as follows

      '''
        code by Tae Hwan Jung(Jeff Jung) @graykode, modify by wmathor
      '''
      import torch
      import numpy as np
      import torch.nn as nn
      import torch.optim as optim
      import torch.utils.data as Data
      
      dtype = torch.FloatTensor
      
      sentences = [ "i like dog", "i love coffee", "i hate milk"]
      
      word_list = " ".join(sentences).split()
      vocab = list(set(word_list))
      word2idx = {w: i for i, w in enumerate(vocab)}
      idx2word = {i: w for i, w in enumerate(vocab)}
      n_class = len(vocab)
      
      # TextRNN Parameter
      batch_size = 2
      n_step = 2 # number of cells(= number of Step)
      n_hidden = 5 # number of hidden units in one cell
      
      def make_data(sentences):
          input_batch = []
          target_batch = []
      
          for sen in sentences:
              word = sen.split()
              input = [word2idx[n] for n in word[:-1]]
              target = word2idx[word[-1]]
      
              input_batch.append(np.eye(n_class)[input])
              target_batch.append(target)
      
          return input_batch, target_batch
      
      input_batch, target_batch = make_data(sentences)
      input_batch, target_batch = torch.Tensor(input_batch), torch.LongTensor(target_batch)
      dataset = Data.TensorDataset(input_batch, target_batch)
      loader = Data.DataLoader(dataset, batch_size, True)
      
      class TextRNN(nn.Module):
          def __init__(self):
              super(TextRNN, self).__init__()
              self.rnn = nn.RNN(input_size=n_class, hidden_size=n_hidden)
              # fc
              self.fc = nn.Linear(n_hidden, n_class)
      
          def forward(self, hidden, X):
              # X: [batch_size, n_step, n_class]
              X = X.transpose(0, 1) # X : [n_step, batch_size, n_class]
              out, hidden = self.rnn(X, hidden)
              # out : [n_step, batch_size, num_directions(=1) * n_hidden]
              # hidden : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
              out = out[-1] # [batch_size, num_directions(=1) * n_hidden] ⭐
              model = self.fc(out)
              return model
      
      model = TextRNN()
      criterion = nn.CrossEntropyLoss()
      optimizer = optim.Adam(model.parameters(), lr=0.001)
      
      # Training
      for epoch in range(5000):
          for x, y in loader:
              # hidden : [num_layers * num_directions, batch, hidden_size]
              hidden = torch.zeros(1, x.shape[0], n_hidden)
              # x : [batch_size, n_step, n_class]
              pred = model(hidden, x)

              # pred : [batch_size, n_class], y : [batch_size] (LongTensor, not one-hot)
              loss = criterion(pred, y)
              if (epoch + 1) % 1000 == 0:
                  print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))

              optimizer.zero_grad()
              loss.backward()
              optimizer.step()
        
      input = [sen.split()[:2] for sen in sentences]
      # Predict
      hidden = torch.zeros(1, len(input), n_hidden)
      predict = model(hidden, input_batch).data.max(1, keepdim=True)[1]
      print([sen.split()[:2] for sen in sentences], '->', [idx2word[n.item()] for n in predict.squeeze()])
      