tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2-D, batch x num_units.
  state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  scope: VariableScope for the created subgraph; defaults to "LSTMCell".

Returns:
  A tuple containing:
  - A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs, where output_dim is num_proj if it was set, num_units otherwise.
  - Tensor(s) representing the new state of the LSTM after reading inputs; same type and shape(s) as state.
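For illustration, a minimal one-step sketch, assuming TensorFlow 1.x with tf.contrib available; all sizes below are made-up placeholders:

    import tensorflow as tf

    batch_size, input_dim, num_units = 32, 16, 64

    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(num_units, state_is_tuple=True)

    inputs = tf.placeholder(tf.float32, [batch_size, input_dim])
    state = cell.zero_state(batch_size, tf.float32)

    # __call__ runs one step and returns the output plus the new (c_state, m_state) tuple.
    output, new_state = cell(inputs, state)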

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.zero_state()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
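A short sketch of what zero_state returns for this cell (TensorFlow 1.x assumed; sizes illustrative):

    import tensorflow as tf

    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(128, state_is_tuple=True)

    # With state_is_tuple=True, zero_state mirrors state_size: a (c, m) pair of
    # 2-D tensors, each of shape [batch_size, num_units], filled with zeros.
    state = cell.zero_state(batch_size=8, dtype=tf.float32)
    print(state)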

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.state_size

The size(s) of state(s) used by this cell: a tuple of (c, m) sizes when state_is_tuple is True, otherwise a single integer.

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.output_size

Integer or TensorShape: size of outputs produced by this cell (num_proj if set, otherwise num_units).
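A quick sketch of inspecting both properties (TensorFlow 1.x assumed; the unit count is arbitrary):

    import tensorflow as tf

    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(256, state_is_tuple=True)

    # state_size describes the structure that zero_state and new states will have;
    # output_size is the width of the per-step output.
    print(cell.state_size)   # expected: a (c, m) state tuple of sizes (256, 256)
    print(cell.output_size)  # expected: 256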

tf.contrib.rnn.CoupledInputForgetGateLSTMCell

class tf.contrib.rnn.CoupledInputForgetGateLSTMCell

Long short-term memory unit (LSTM) recurrent network cell.

The default non-peephole implementation is based on: http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.

The peephole implementation is based on: https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.

As the class name indicates, this cell couples the input and forget gates, as studied in: https://arxiv.org/abs/1503.04069
K. Greff et al. "LSTM: A Search Space Odyssey", 2015.
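A minimal end-to-end sketch of unrolling this cell over a sequence with tf.nn.dynamic_rnn (TensorFlow 1.x assumed; all sizes are placeholders):

    import tensorflow as tf

    batch_size, max_time, input_dim, num_units = 32, 20, 50, 128

    inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_dim])

    # use_peepholes=True selects the peephole variant referenced above.
    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
        num_units, use_peepholes=True, state_is_tuple=True)

    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # outputs: [batch_size, max_time, num_units]; final_state: (c, m) state tuple.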

tf.contrib.rnn.AttentionCellWrapper.__init__()

tf.contrib.rnn.AttentionCellWrapper.__init__(cell, attn_length, attn_size=None, attn_vec_size=None, input_size=None, state_is_tuple=False)

Create a cell with attention.

Args:
  cell: an RNNCell, to which attention is added.
  attn_length: integer, the size of an attention window.
  attn_size: integer, the size of an attention vector. Equal to cell.output_size by default.
  attn_vec_size: integer, the number of convolutional features calculated on attention state and the size of the hidden layer built from base cell state. Equal to attn_size by default.
  input_size: integer, the size of a hidden linear layer built from inputs and attention. Derived from the input tensor by default.
  state_is_tuple: if True, accepted and returned states are tuples; if False (default), states are concatenated along the column axis.
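A minimal construction sketch (TensorFlow 1.x assumed; the attention window and cell size are arbitrary choices):

    import tensorflow as tf

    base_cell = tf.contrib.rnn.BasicLSTMCell(128, state_is_tuple=True)

    # Wrap the base cell so each step attends over a window of the last 10 states;
    # attn_size defaults to base_cell.output_size.
    attn_cell = tf.contrib.rnn.AttentionCellWrapper(
        base_cell, attn_length=10, state_is_tuple=True)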

tf.contrib.rnn.AttentionCellWrapper.__call__()

tf.contrib.rnn.AttentionCellWrapper.__call__(inputs, state, scope=None)

Long short-term memory cell with attention (LSTMA).
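A one-step usage sketch (TensorFlow 1.x assumed; shapes illustrative):

    import tensorflow as tf

    batch_size, input_dim = 16, 32

    attn_cell = tf.contrib.rnn.AttentionCellWrapper(
        tf.contrib.rnn.BasicLSTMCell(64, state_is_tuple=True),
        attn_length=5, state_is_tuple=True)

    inputs = tf.placeholder(tf.float32, [batch_size, input_dim])
    state = attn_cell.zero_state(batch_size, tf.float32)

    # One step of the attention-wrapped cell; new_state carries the wrapped
    # cell's state together with the attention state.
    output, new_state = attn_cell(inputs, state)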

tf.contrib.rnn.AttentionCellWrapper.zero_state()

tf.contrib.rnn.AttentionCellWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
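A sketch of the zero state for the wrapped cell (TensorFlow 1.x assumed); with state_is_tuple=True the result is a nested tuple that includes the attention state alongside the base cell state:

    import tensorflow as tf

    attn_cell = tf.contrib.rnn.AttentionCellWrapper(
        tf.contrib.rnn.BasicLSTMCell(64, state_is_tuple=True),
        attn_length=5, state_is_tuple=True)

    # The returned structure mirrors attn_cell.state_size: the base cell's
    # zero state plus zero-filled attention tensors.
    state = attn_cell.zero_state(batch_size=4, dtype=tf.float32)
    print(state)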

tf.contrib.rnn.AttentionCellWrapper.state_size

The size(s) of state(s) used by this cell. With state_is_tuple=True this nests the wrapped cell's state size together with the attention state sizes; otherwise it is a single integer.

tf.contrib.rnn.AttentionCellWrapper.output_size

Integer or TensorShape: size of outputs produced by this cell.
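A quick sketch of inspecting the wrapper's sizes (TensorFlow 1.x assumed; sizes arbitrary):

    import tensorflow as tf

    attn_cell = tf.contrib.rnn.AttentionCellWrapper(
        tf.contrib.rnn.BasicLSTMCell(64, state_is_tuple=True),
        attn_length=5, state_is_tuple=True)

    # state_size nests the base cell's state size with the attention sizes;
    # output_size is the width of the wrapped cell's per-step output.
    print(attn_cell.state_size)
    print(attn_cell.output_size)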