tf.contrib.rnn.GRUBlockCell.zero_state()

tf.contrib.rnn.GRUBlockCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s). Args: batch_size: int, float, or unit Tensor representing the batch size. dtype: the data type to use for the state.
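
A minimal usage sketch, assuming a TensorFlow 1.x environment where tf.contrib.rnn is still available; the batch size, cell size, and placeholder shapes are illustrative, not part of the API above.

import tensorflow as tf  # assumes a TF 1.x build with tf.contrib present

batch_size, num_units, input_depth = 4, 64, 32            # illustrative sizes
cell = tf.contrib.rnn.GRUBlockCell(num_units)

# For a GRU cell, zero_state returns a single [batch_size, state_size] tensor of zeros.
init_state = cell.zero_state(batch_size, dtype=tf.float32)

# The zero state is typically fed to dynamic_rnn as the initial state.
inputs = tf.placeholder(tf.float32, [batch_size, 10, input_depth])  # [batch, time, depth]
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)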

tf.contrib.rnn.LayerNormBasicLSTMCell.state_size

tf.contrib.rnn.LayerNormBasicLSTMCell.state_size
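
A short illustration of what this property reports, assuming TF 1.x with tf.contrib; num_units=128 and the batch size are arbitrary example values.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(num_units=128)

# state_size is an LSTMStateTuple(c=128, h=128); output_size is 128.
print(cell.state_size)
print(cell.output_size)

# zero_state builds a matching zero-filled LSTMStateTuple from state_size.
state = cell.zero_state(batch_size=4, dtype=tf.float32)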

tf.contrib.rnn.GridLSTMCell.__init__()

tf.contrib.rnn.GridLSTMCell.__init__(num_units, use_peepholes=False, share_time_frequency_weights=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, ...)
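
A hedged construction sketch against the 2016-era signature shown above; only the visible arguments are used, all values are illustrative, and later releases add further frequency-related arguments (e.g. feature_size, num_frequency_blocks) that may be required.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

# Frequency-grid LSTM cell: peepholes enabled, weights shared across the
# frequency blocks, and the cell state clipped to [-5, 5].
cell = tf.contrib.rnn.GridLSTMCell(
    num_units=64,
    use_peepholes=True,
    share_time_frequency_weights=True,
    cell_clip=5.0,
    forget_bias=1.0)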

tf.contrib.rnn.GRUBlockCell.__call__()

tf.contrib.rnn.GRUBlockCell.__call__(x, h_prev, scope=None) GRU cell.
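
A single-step sketch of the call, assuming TF 1.x with tf.contrib; the shapes are illustrative.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

batch_size, input_depth, num_units = 4, 32, 64   # illustrative sizes
cell = tf.contrib.rnn.GRUBlockCell(num_units)

x = tf.placeholder(tf.float32, [batch_size, input_depth])   # input for one time step
h_prev = cell.zero_state(batch_size, dtype=tf.float32)      # previous hidden state

# Calling the cell runs one GRU step; for a GRU the per-step output and the
# new state are the same [batch_size, num_units] tensor.
output, h_next = cell(x, h_prev)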

tf.contrib.rnn.LSTMBlockCell.__init__()

tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False) Initialize the basic LSTM cell.
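
A minimal constructor sketch, assuming TF 1.x with tf.contrib; the sizes and flag values are illustrative.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

cell = tf.contrib.rnn.LSTMBlockCell(num_units=256,
                                    forget_bias=1.0,    # added to the forget gate bias
                                    use_peephole=True)  # enable peephole connections

inputs = tf.placeholder(tf.float32, [None, 50, 128])    # [batch, time, depth]
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)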

tf.contrib.rnn.GRUBlockCell

class tf.contrib.rnn.GRUBlockCell Block GRU cell implementation. The implementation is based on: http://arxiv.org/abs/1406.1078
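
A sketch showing that the block cell is used exactly like the plain GRUCell it mirrors, assuming TF 1.x with tf.contrib; GRUCell, the scope names, and the sizes are illustrative assumptions.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

num_units = 64
inputs = tf.placeholder(tf.float32, [None, 20, 16])   # [batch, time, depth]

# GRUBlockCell fuses the GRU gate computations into a single op but follows
# the same RNNCell interface as the unfused GRUCell.
with tf.variable_scope("block"):
    block_out, _ = tf.nn.dynamic_rnn(
        tf.contrib.rnn.GRUBlockCell(num_units), inputs, dtype=tf.float32)
with tf.variable_scope("reference"):
    ref_out, _ = tf.nn.dynamic_rnn(
        tf.contrib.rnn.GRUCell(num_units), inputs, dtype=tf.float32)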

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, ...)
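
A hedged sketch of constructing the cell with an output projection, assuming TF 1.x with tf.contrib; num_units, num_proj, and the clip value are illustrative.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

# Coupled input/forget gate LSTM (input gate = 1 - forget gate), with the
# output projected down to num_proj units and the projection clipped.
cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
    num_units=128,
    use_peepholes=True,
    num_proj=64,
    proj_clip=3.0)

inputs = tf.placeholder(tf.float32, [8, 30, 40])        # [batch, time, depth]
outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
# outputs has shape [8, 30, 64] because of the num_proj projection.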

tf.contrib.rnn.GRUBlockCell.output_size

tf.contrib.rnn.GRUBlockCell.output_size
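
A short illustration, assuming TF 1.x with tf.contrib; the cell size 96 is arbitrary.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

cell = tf.contrib.rnn.GRUBlockCell(96)

# For a GRU cell both properties equal the cell size: the hidden state is
# also the per-step output.
print(cell.output_size)  # 96
print(cell.state_size)   # 96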

tf.contrib.rnn.LSTMBlockCell

class tf.contrib.rnn.LSTMBlockCell Basic LSTM recurrent network cell. The implementation is based on: http://arxiv.org/abs/1409.2329
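
A stacking sketch, assuming TF 1.x with tf.contrib; MultiRNNCell, the layer count, and the sizes are illustrative choices rather than anything mandated by the class above.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

# LSTMBlockCell follows the standard RNNCell interface, so it can be stacked
# and driven like any other cell.
cells = [tf.contrib.rnn.LSTMBlockCell(128) for _ in range(2)]
stacked = tf.contrib.rnn.MultiRNNCell(cells)

inputs = tf.placeholder(tf.float32, [None, 25, 64])     # [batch, time, depth]
outputs, state = tf.nn.dynamic_rnn(stacked, inputs, dtype=tf.float32)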

tf.contrib.rnn.LSTMBlockCell.zero_state()

tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s). Args: batch_size: int, float, or unit Tensor representing the batch size. dtype: the data type to use for the state.
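
A minimal sketch of the state this returns for an LSTM-style cell, assuming TF 1.x with tf.contrib; batch_size=8 and num_units=64 are arbitrary.

import tensorflow as tf  # TF 1.x with tf.contrib assumed

cell = tf.contrib.rnn.LSTMBlockCell(num_units=64)

# For an LSTM cell, zero_state returns an LSTMStateTuple of two zero tensors,
# c and h, each of shape [batch_size, num_units].
state = cell.zero_state(batch_size=8, dtype=tf.float32)
print(state.c.shape)  # (8, 64)
print(state.h.shape)  # (8, 64)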
