I have two questions:
1. Is an RNN's dropout mask fixed across all time steps of a sequence, or does it change at each step?
2. An embedding layer maps one-hot vectors to dense vectors. Why does this process need the `+1`, as shown below?
embedding = tf.get_variable("embedding", [len(words)+1, rnn_size])
inputs = tf.nn.embedding_lookup(embedding, input_data)
vocab_len = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
emb_dim = word_to_vec_map["cucumber"].shape[0] # define dimensionality of your GloVe word vectors (= 50)
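For context on question 2, here is a minimal NumPy sketch of the indexing involved (the toy vocabulary and variable names are made up for illustration). If word indices start at 1, with 0 reserved (e.g. for padding, as in the Keras `Embedding` convention), then valid indices run 0..len(words), so the lookup table needs `len(words) + 1` rows:

```python
import numpy as np

# Hypothetical toy vocabulary; indices are 1-based, 0 is reserved.
words = ["the", "cat", "sat"]
word_to_index = {w: i + 1 for i, w in enumerate(words)}

rnn_size = 4
# len(words) + 1 rows: row 0 for padding, rows 1..len(words) for words.
embedding = np.random.randn(len(words) + 1, rnn_size)

# Emulate tf.nn.embedding_lookup: indexing selects rows of the table.
input_data = np.array([word_to_index["cat"], word_to_index["sat"], 0])
inputs = embedding[input_data]
print(inputs.shape)  # (3, 4)
```

Without the `+1`, looking up the largest index `len(words)` would fall out of bounds.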