tf.nn.rnn_cell.LSTMCell.__call__()

tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2-D, batch x num_units.
  state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  scope: VariableScope for the created subgraph; defaults to "LSTMCell".

Returns:
  A tuple containing:
  - A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs when the previous state was state. Here output_dim is num_proj if num_proj was set, num_units otherwise.
  - Tensor(s) representing the new state of the LSTM after reading inputs when the previous state was state. Same type and shape(s) as state.
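The computation performed by one step can be sketched in plain NumPy. This is a minimal illustration, not the library implementation: it assumes the i, j, f, o gate ordering used by TensorFlow's basic LSTM, and omits the forget bias, cell clipping, and the optional projection layer for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(inputs, c_prev, m_prev, W, b):
    """One LSTM step on a batch (illustrative sketch).

    inputs is [batch, input_size]; c_prev and m_prev are
    [batch, num_units]; W is [input_size + num_units, 4 * num_units]
    and b is [4 * num_units].  Gate order (i, j, f, o) is assumed."""
    concat = np.concatenate([inputs, m_prev], axis=1) @ W + b
    i, j, f, o = np.split(concat, 4, axis=1)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(j)  # new cell state
    m = sigmoid(o) * np.tanh(c)                        # new output
    return m, (c, m)   # output, new (c_state, m_state) tuple

batch, input_size, num_units = 2, 3, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((batch, input_size))
c0 = np.zeros((batch, num_units))
m0 = np.zeros((batch, num_units))
W = rng.standard_normal((input_size + num_units, 4 * num_units))
b = np.zeros(4 * num_units)

output, (c1, m1) = lstm_step(x, c0, m0, W, b)
print(output.shape)   # (2, 4) -- [batch x output_dim]
```

Note the return shape mirrors the documented contract: the output is [batch x output_dim] and the new state has the same structure as the state passed in.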

tf.nn.rnn_cell.LSTMCell.zero_state()

tf.nn.rnn_cell.LSTMCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.
  If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
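A hypothetical NumPy analogue makes the two return shapes concrete: an int state_size yields a single zero tensor, while a tuple (the state_is_tuple=True case, where the LSTM state is a (c_state, m_state) pair) yields a matching tuple of zero tensors.

```python
import numpy as np

def zero_state(batch_size, state_size):
    """Illustrative analogue of zero_state (not the TF function).

    An int state_size returns one [batch_size, state_size] zero array;
    a tuple returns a tuple of zero arrays with matching structure."""
    if isinstance(state_size, int):
        return np.zeros((batch_size, state_size))
    return tuple(np.zeros((batch_size, s)) for s in state_size)

# state_is_tuple=True: the LSTM state is a (c_state, m_state) pair
c, m = zero_state(32, (128, 128))
print(c.shape, m.shape)   # (32, 128) (32, 128)
```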

tf.nn.rnn_cell.LSTMCell.state_size

tf.nn.rnn_cell.LSTMCell.state_size

Size(s) of state(s) used by this cell.

tf.nn.rnn_cell.LSTMCell.output_size

tf.nn.rnn_cell.LSTMCell.output_size

Integer or TensorShape: size of outputs produced by this cell.

tf.nn.rnn_cell.LSTMCell

class tf.nn.rnn_cell.LSTMCell

Long short-term memory unit (LSTM) recurrent network cell.

The default non-peephole implementation is based on:
http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.

The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.

The class uses optional peep-hole connections, optional cell clipping, and an optional projection layer.
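The peephole variant differs from the default step in that the input and forget gates also look at the previous cell state, and the output gate looks at the freshly updated cell state. The sketch below is illustrative only; the per-unit (diagonal) peephole weights w_i, w_f, w_o and the i, j, f, o gate ordering are assumptions, not the library's internals.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, c_prev, m_prev, W, b, w_i, w_f, w_o):
    """Sketch of the peephole LSTM step (Sak et al. style).

    The peephole weights w_i/w_f/w_o are per-unit vectors of shape
    [num_units]; all names here are illustrative."""
    concat = np.concatenate([x, m_prev], axis=1) @ W + b
    i, j, f, o = np.split(concat, 4, axis=1)
    i = sigmoid(i + w_i * c_prev)      # peephole into the input gate
    f = sigmoid(f + w_f * c_prev)      # peephole into the forget gate
    c = f * c_prev + i * np.tanh(j)
    o = sigmoid(o + w_o * c)           # peephole sees the *new* cell state
    m = o * np.tanh(c)
    return m, (c, m)

batch, input_size, num_units = 2, 3, 4
rng = np.random.default_rng(1)
x = rng.standard_normal((batch, input_size))
c0 = np.zeros((batch, num_units))
m0 = np.zeros((batch, num_units))
W = rng.standard_normal((input_size + num_units, 4 * num_units))
b = np.zeros(4 * num_units)
w_i = w_f = w_o = np.full(num_units, 0.1)

m1, (c1, _) = peephole_lstm_step(x, c0, m0, W, b, w_i, w_f, w_o)
print(m1.shape, c1.shape)   # (2, 4) (2, 4)
```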

tf.nn.rnn_cell.InputProjectionWrapper.__init__()

tf.nn.rnn_cell.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)

Create a cell with input projection.

Args:
  cell: an RNNCell, a projection of inputs is added before it.
  num_proj: Python integer. The dimension to project to.
  input_size: Deprecated and unused.

Raises:
  TypeError: if cell is not an RNNCell.
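Conceptually, the wrapper inserts one learned linear map in front of the wrapped cell, so a cell built for num_proj-dimensional inputs can consume inputs of a different width. A minimal NumPy sketch of that projection step, with an illustrative weight matrix W_proj (the real wrapper creates its variables internally):

```python
import numpy as np

def input_projection(inputs, W_proj):
    """Sketch of the projection InputProjectionWrapper applies before
    running the wrapped cell.  W_proj is [input_size, num_proj];
    the name W_proj is illustrative, not the library's variable name."""
    return inputs @ W_proj

x = np.ones((2, 10))        # [batch, input_size]
W = np.zeros((10, 5))       # project 10-dim inputs down to num_proj=5
y = input_projection(x, W)
print(y.shape)              # (2, 5) -- now sized for the wrapped cell
```

The projected result, not the raw input, is what the wrapped cell's __call__ then receives.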

tf.nn.rnn_cell.InputProjectionWrapper.__call__()

tf.nn.rnn_cell.InputProjectionWrapper.__call__(inputs, state, scope=None) Run the input projection and then the cell.

tf.nn.rnn_cell.InputProjectionWrapper.zero_state()

tf.nn.rnn_cell.InputProjectionWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.
  If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.

tf.nn.rnn_cell.InputProjectionWrapper.state_size

tf.nn.rnn_cell.InputProjectionWrapper.state_size

Size(s) of state(s) used by this cell.

tf.nn.rnn_cell.InputProjectionWrapper.output_size

tf.nn.rnn_cell.InputProjectionWrapper.output_size

Integer or TensorShape: size of outputs produced by this cell.