tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s). Args: batch_size: int, float, or unit Tensor representing the batch size. dtype: the data type to use for the state.
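The entry above is terse, so here is a minimal NumPy sketch (illustrative only, not TensorFlow code) of what `zero_state` conceptually returns for an LSTM cell: a zero-filled `(c, h)` state pair whose leading dimension is `batch_size`. The `num_units` parameter and the function name are assumptions made for this sketch.

```python
import numpy as np

def zero_state(batch_size, num_units, dtype=np.float32):
    # Illustrative stand-in for RNNCell.zero_state: an LSTM cell's state
    # is a (cell state c, hidden state h) pair, each of shape
    # [batch_size, num_units], filled with zeros.
    c = np.zeros((batch_size, num_units), dtype=dtype)
    h = np.zeros((batch_size, num_units), dtype=dtype)
    return c, h

c, h = zero_state(batch_size=4, num_units=8)
```

In the real API the cell already knows its own `state_size`, so only `batch_size` and `dtype` are passed in.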
tf.contrib.rnn.GRUBlockCell.output_size
class tf.contrib.rnn.LSTMBlockCell Basic LSTM recurrent network cell.
tf.contrib.rnn.GridLSTMCell.__call__(inputs, state, scope=None) Run one step of LSTM.
tf.contrib.rnn.GRUBlockCell.state_size
class tf.contrib.rnn.LayerNormBasicLSTMCell LSTM unit with layer normalization and recurrent dropout.
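To make the layer-normalization part concrete, here is a hedged NumPy sketch of one layer-normalized LSTM step. It is a simplification, not the cell's actual implementation: the learned per-gate gain/bias of real layer normalization and the recurrent dropout are omitted, and the function and parameter names (`lstm_ln_step`, `W`, `b`) are hypothetical. The `forget_bias = 1.0` convention follows the standard TF LSTM cells.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each row to zero mean / unit variance
    # (gain = 1, bias = 0 in this simplified sketch).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_ln_step(x, c, h, W, b):
    # One LSTM step: project [x, h] to the four gate pre-activations,
    # layer-normalize each gate block, then apply the usual LSTM update.
    z = np.concatenate([x, h], axis=-1) @ W + b
    i, j, f, o = np.split(z, 4, axis=-1)
    i, j, f, o = (layer_norm(g) for g in (i, j, f, o))
    new_c = c * sigmoid(f + 1.0) + sigmoid(i) * np.tanh(j)  # forget_bias = 1.0
    new_h = np.tanh(layer_norm(new_c)) * sigmoid(o)
    return new_c, new_h
```

Normalizing each gate's pre-activation keeps the gate statistics stable across time steps, which is the motivation for this cell.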
tf.contrib.rnn.LSTMBlockCell.state_size
tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype) Return zero-filled state tensor(s).
tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None) Long short-term memory cell with attention (LSTMA).
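The wrapper above attends over a window of past cell outputs. As a rough illustration of that idea only, here is a simplified dot-product attention sketch in NumPy; the real `AttentionCellWrapper` uses a learned attention mechanism inside the cell, so the function name and the plain dot-product scoring here are assumptions for this sketch.

```python
import numpy as np

def attend(query, attn_states):
    # query: [batch, units]; attn_states: [batch, attn_length, units],
    # the window of stored past states to attend over.
    # Dot-product scores, numerically stable softmax, weighted sum.
    scores = np.einsum('bu,blu->bl', query, attn_states)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    context = np.einsum('bl,blu->bu', weights, attn_states)
    return context, weights
```

The returned context vector is what gets combined with the wrapped cell's output at each step.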
tf.contrib.rnn.TimeFreqLSTMCell.state_size