class tf.contrib.rnn.LSTMBlockCell Basic LSTM recurrent network cell. Unlike BasicLSTMCell, the implementation is a single fused op per time step, so it should be much faster while computing the same function.
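The computation the cell fuses is the standard LSTM step. A minimal NumPy sketch of that step (not the fused kernel itself; the gate ordering `i, g, f, o` and the weight layout here are illustrative choices, and `lstm_step` is a hypothetical helper name):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b, forget_bias=1.0):
    """One LSTM step on a batch.

    x: [B, D] input, h_prev/c_prev: [B, N] previous hidden/cell state,
    W: [D + N, 4N] combined weights, b: [4N] biases.
    forget_bias mirrors the cell's default of 1.0, which discourages
    forgetting early in training.
    """
    z = np.concatenate([x, h_prev], axis=1) @ W + b
    i, g, f, o = np.split(z, 4, axis=1)          # gates + candidate
    c = sigmoid(f + forget_bias) * c_prev + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c
```

The fused op computes all four gate pre-activations with one matrix multiply, which is where the speedup over composing separate ops comes from.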
tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size
tf.contrib.rnn.GRUBlockCell.__init__(cell_size) Initialize the Block GRU cell. Args: cell_size: int, the number of units in the GRU cell.
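As with the LSTM block cell, the GRU block cell fuses the standard GRU step. A hedged NumPy sketch of that step (`gru_step` is a hypothetical helper; the convention of gating the old state with `z` follows the TF GRU formulation, where other texts swap `z` and `1 - z`):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wg, bg, Wc, bc):
    """One GRU step on a batch.

    x: [B, D] input, h_prev: [B, N] previous state,
    Wg: [D + N, 2N] weights for the update (z) and reset (r) gates,
    Wc: [D + N, N] weights for the candidate state.
    """
    xh = np.concatenate([x, h_prev], axis=1)
    z, r = np.split(sigmoid(xh @ Wg + bg), 2, axis=1)
    # Reset gate scales the recurrent contribution to the candidate.
    h_tilde = np.tanh(np.concatenate([x, r * h_prev], axis=1) @ Wc + bc)
    # Update gate interpolates between old state and candidate.
    return z * h_prev + (1.0 - z) * h_tilde
```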
tf.contrib.rnn.LayerNormBasicLSTMCell.__call__(inputs, state, scope=None) LSTM cell with layer normalization and recurrent dropout.
class tf.contrib.rnn.LayerNormBasicLSTMCell LSTM unit with layer normalization and recurrent dropout.
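The layer normalization this cell applies to its gate pre-activations can be sketched in NumPy as follows (a minimal sketch of the standard LN transform, not the cell's internal code; `layer_norm` is a hypothetical helper name):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    """Normalize each row of x [B, N] to zero mean and unit variance
    across the feature axis, then apply a learned per-feature scale
    (gamma) and shift (beta), both of shape [N]."""
    mu = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```

Unlike batch normalization, the statistics are computed per example, so the transform behaves identically at training and inference time, which is what makes it practical inside a recurrent step.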
tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype) Return zero-filled state tensor(s).
tf.contrib.rnn.TimeFreqLSTMCell.state_size
tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, …)
tf.contrib.rnn.LSTMBlockCell.output_size
tf.contrib.rnn.TimeFreqLSTMCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s). Args: batch_size: int, float, or unit Tensor representing the batch size. dtype: the data type to use for the state.
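The `zero_state` methods listed above all share the same contract: produce one zero-filled tensor of shape `[batch_size, size]` per component of the cell's `state_size` (a tuple for cells whose state has separate `c` and `h` parts). A NumPy sketch of that contract, assuming a hypothetical `zero_state` helper rather than the real `RNNCell` method:

```python
import numpy as np

def zero_state(batch_size, state_size, dtype=np.float32):
    """Mimic the RNNCell.zero_state contract: a zero array per state
    component. state_size may be a single int or a tuple of ints."""
    if isinstance(state_size, tuple):
        return tuple(np.zeros((batch_size, s), dtype=dtype)
                     for s in state_size)
    return np.zeros((batch_size, state_size), dtype=dtype)
```

This is typically called once per batch to build the initial state fed into the first time step.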