tf.contrib.rnn.CoupledInputForgetGateLSTMCell.output_size
tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None) Run one step of LSTM.
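The `__call__(inputs, state, scope=None)` signature above is the step contract shared by all RNN cells: given a batch of inputs and the previous state, the cell returns `(output, new_state)`, with shapes governed by the `output_size` and `state_size` properties. A minimal pure-Python sketch of that contract (the `TinyRNNCell` below is illustrative, not TensorFlow's implementation):

```python
import numpy as np

class TinyRNNCell:
    """Illustrative cell following the (inputs, state) -> (output, new_state) contract."""

    def __init__(self, num_units, input_size, seed=0):
        rng = np.random.default_rng(seed)
        self.num_units = num_units
        self.W = rng.standard_normal((input_size, num_units)) * 0.1
        self.U = rng.standard_normal((num_units, num_units)) * 0.1
        self.b = np.zeros(num_units)

    @property
    def state_size(self):
        return self.num_units

    @property
    def output_size(self):
        return self.num_units

    def __call__(self, inputs, state):
        # One step: new_state = tanh(x W + h U + b); output equals the new state.
        new_state = np.tanh(inputs @ self.W + state @ self.U + self.b)
        return new_state, new_state

cell = TinyRNNCell(num_units=4, input_size=3)
x = np.ones((2, 3))                    # batch of 2, feature size 3
h0 = np.zeros((2, cell.state_size))    # zero initial state
out, h1 = cell(x, h0)
print(out.shape)  # (2, 4)
```

Stacking such a cell over time is just repeated application: feed `h1` back as the `state` argument of the next step.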
tf.contrib.rnn.TimeFreqLSTMCell.state_size
tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None)
tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None) Long short-term memory cell with attention (LSTMA).
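`AttentionCellWrapper` wraps an inner cell and attends over a fixed window of its most recent outputs. The sketch below mirrors that idea in plain NumPy; the window bookkeeping, dot-product scoring, and the trivial inner `tanh` cell are all illustrative assumptions, not TensorFlow's exact parameterization:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class AttentionWrapperSketch:
    """Keeps the last `attn_length` cell outputs and attends over them (illustrative)."""

    def __init__(self, cell, attn_length):
        self.cell = cell
        self.attn_length = attn_length
        self.history = []  # past outputs, each of shape [batch, num_units]

    def __call__(self, inputs, state):
        output, new_state = self.cell(inputs, state)
        # Append the new output and keep only the last `attn_length` entries.
        self.history = (self.history + [output])[-self.attn_length:]
        attn_states = np.stack(self.history, axis=1)        # [batch, window, units]
        scores = np.einsum('bu,bwu->bw', output, attn_states)
        weights = softmax(scores)                           # attention over the window
        context = np.einsum('bw,bwu->bu', weights, attn_states)
        return context, new_state

# A trivial inner cell: new_state = tanh(x W + h U), output == new_state.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1
U = rng.standard_normal((4, 4)) * 0.1
inner = lambda x, h: ((np.tanh(x @ W + h @ U),) * 2)

wrapped = AttentionWrapperSketch(inner, attn_length=3)
h = np.zeros((2, 4))
for _ in range(5):                 # run 5 steps; the window keeps only the last 3
    out, h = wrapped(np.ones((2, 3)), h)
print(out.shape, len(wrapped.history))  # (2, 4) 3
```

The wrapper's output is the attention context rather than the raw cell output, which is why its `output_size` is a property of the wrapper, not of the inner cell alone.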
tf.contrib.rnn.GRUBlockCell.state_size
tf.contrib.rnn.LayerNormBasicLSTMCell.output_size
tf.contrib.rnn.AttentionCellWrapper.output_size
tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s).
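`zero_state(batch_size, dtype)` returns zero-filled state tensor(s) shaped `[batch_size, state_size]`; for cells with composite state (such as an LSTM's `(c, h)` pair) it returns one zero tensor per component. A NumPy sketch of those semantics (illustrative, not the TensorFlow source):

```python
import numpy as np

def zero_state(state_size, batch_size, dtype=np.float32):
    # state_size may be an int or a tuple of ints (e.g. an LSTM's (c, h) sizes).
    if isinstance(state_size, tuple):
        return tuple(np.zeros((batch_size, s), dtype=dtype) for s in state_size)
    return np.zeros((batch_size, state_size), dtype=dtype)

h0 = zero_state(4, batch_size=2)          # single-tensor state
c0, m0 = zero_state((4, 4), batch_size=2) # composite (c, h)-style state
print(h0.shape, c0.shape)  # (2, 4) (2, 4)
```

Passing the right `dtype` matters because the initial state is fed straight into the first `__call__` step and must match the input tensors' dtype.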
class tf.contrib.rnn.AttentionCellWrapper Basic attention cell wrapper. Implementation based on https://arxiv.org/abs/1409.0473.