class tf.contrib.rnn.GRUBlockCell Block GRU cell implementation. The implementation is based on:
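The block GRU cell fuses the standard GRU gate computations into one op. As an illustration only (not the actual fused kernel), a minimal NumPy sketch of one GRU step, with hypothetical weight names `W_z`, `W_r`, `W_h` acting on the concatenated input and state:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W_z, W_r, W_h, b_z, b_r, b_h):
    """One GRU step on input x and previous state h (sketch, not TF's kernel)."""
    xh = np.concatenate([x, h], axis=-1)
    z = sigmoid(xh @ W_z + b_z)            # update gate
    r = sigmoid(xh @ W_r + b_r)            # reset gate
    xrh = np.concatenate([x, r * h], axis=-1)
    h_tilde = np.tanh(xrh @ W_h + b_h)     # candidate state
    return (1.0 - z) * h + z * h_tilde     # interpolate old and candidate state
```

With all-zero weights both gates sit at 0.5 and the candidate is 0, so the new state is half the old one; this is a quick sanity check on the gate wiring.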
tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, …)
tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False) Initialize the basic LSTM cell.
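`forget_bias` is added to the forget gate's pre-activation so the gate starts near open, which helps gradients flow early in training. A minimal NumPy sketch of one basic LSTM step showing where the bias enters (illustrative only; `W` and `b` are hypothetical names for the stacked gate parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b, forget_bias=1.0):
    """One basic LSTM step; W maps [x, h] to the four stacked gates i, j, f, o."""
    gates = np.concatenate([x, h], axis=-1) @ W + b
    i, j, f, o = np.split(gates, 4, axis=-1)
    # forget_bias is added to f before the sigmoid, biasing the gate toward 1
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, new_c
```

With zero weights, the forget gate evaluates to sigmoid(forget_bias) rather than 0.5, so roughly 73% of the cell state is retained at the default of 1.0.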
tf.contrib.rnn.AttentionCellWrapper.__init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False)
tf.contrib.rnn.GRUBlockCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s).
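`zero_state` just builds an all-zeros initial state of shape `[batch_size, num_units]`. An equivalent NumPy sketch, assuming a single-tensor state as GRU cells use:

```python
import numpy as np

def zero_state(batch_size, num_units, dtype=np.float32):
    """Zero-filled initial recurrent state: shape [batch_size, num_units]."""
    return np.zeros((batch_size, num_units), dtype=dtype)
```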
tf.contrib.rnn.GridLSTMCell.__init__(num_units, use_peepholes=False, share_time_frequency_weights=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, …)
tf.contrib.rnn.AttentionCellWrapper.state_size
tf.contrib.rnn.LayerNormBasicLSTMCell.__call__(inputs, state, scope=None) LSTM cell with layer normalization and recurrent dropout.
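Layer normalization rescales each example's activation vector to zero mean and unit variance before a learned gain and shift are applied. A minimal NumPy sketch of the normalization this cell applies to its gate pre-activations (illustrative; the TF cell uses trainable gain/shift variables):

```python
import numpy as np

def layer_norm(x, gain=1.0, shift=0.0, eps=1e-6):
    """Normalize each row to zero mean / unit variance, then rescale."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + shift
```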
tf.contrib.rnn.GRUBlockCell.__init__(cell_size) Initialize the Block GRU cell.
tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size