tf.nn.rnn_cell.LSTMCell.__call__()

tf.nn.rnn_cell.LSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2D, batch x num_units.
  state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  scope: VariableScope for the created subgraph; defaults to "LSTMCell".

Returns:
  A tuple containing:
  - A 2-D, [batch x output_dim], Tensor representing the output of the LSTM after reading inputs when the previous state was state. Here output_dim is num_proj if num_proj was set, num_units otherwise.
  - Tensor(s) representing the new state of LSTM after reading inputs when the previous state was state. Same type and shape(s) as state.

Raises:
  ValueError: If input size cannot be inferred from inputs via static shape inference.
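A minimal sketch of a single call, assuming the TF 1.x graph-mode API; the shapes and variable names here are illustrative, not part of the documented interface:

    import tensorflow as tf

    batch, input_dim, num_units = 32, 100, 256
    cell = tf.nn.rnn_cell.LSTMCell(num_units, state_is_tuple=True)

    inputs = tf.placeholder(tf.float32, [batch, input_dim])
    state = cell.zero_state(batch, tf.float32)   # tuple of (c_state, m_state)

    # One step: returns the output and the new (c, m) state tuple.
    output, new_state = cell(inputs, state)      # output: [batch, num_units]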

tf.nn.rnn_cell.LSTMCell.output_size

tf.nn.rnn_cell.LSTMCell.output_size Integer or TensorShape: size of outputs produced by this cell.

tf.nn.rnn_cell.LSTMCell.state_size

tf.nn.rnn_cell.LSTMCell.state_size Size(s) of state(s) used by this cell.
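A hedged illustration of what these two properties report, assuming TF 1.x behavior: with state_is_tuple=True the state size is a (c, m) pair, and output_size is num_proj when a projection layer is configured, num_units otherwise. The values below are arbitrary:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.LSTMCell(num_units=256, num_proj=128,
                                   state_is_tuple=True)
    print(cell.output_size)  # 128: num_proj, since a projection layer is set
    print(cell.state_size)   # LSTMStateTuple(c=256, h=128)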

tf.nn.rnn_cell.LSTMCell

class tf.nn.rnn_cell.LSTMCell

Long short-term memory unit (LSTM) recurrent network cell.

The default non-peephole implementation is based on: http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.

The peephole implementation is based on: https://research.google.com/pubs/archive/43905.pdf Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.

The class uses optional peep-hole connections, optional cell clipping, and an optional projection layer.
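In practice the cell is usually driven over a whole sequence rather than called step by step. A brief sketch, assuming tf.nn.dynamic_rnn from the same-era API and illustrative shapes:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.LSTMCell(num_units=256, use_peepholes=True)

    # [batch, time, features] sequence input
    seq = tf.placeholder(tf.float32, [32, 50, 100])

    # dynamic_rnn unrolls the cell over the time axis and threads the state.
    outputs, final_state = tf.nn.dynamic_rnn(cell, seq, dtype=tf.float32)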

tf.nn.rnn_cell.LSTMCell.__init__()

tf.nn.rnn_cell.LSTMCell.__init__(num_units, input_size=None, use_peepholes=False, cell_clip=None, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=True, activation=tanh)

Initialize the parameters for an LSTM cell.

Args:
  num_units: int, The number of units in the LSTM cell.
  input_size: Deprecated and unused.
  use_peepholes: bool, set True to enable diagonal/peephole connections.
  cell_clip: (optional) A float value, if provided the cell state is clipped by this value prior to the cell output activation.
  initializer: (optional) The initializer to use for the weight and projection matrices.
  num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
  proj_clip: (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
  num_unit_shards: How to split the weight matrix. If >1, the weight matrix is stored across num_unit_shards.
  num_proj_shards: How to split the projection matrix. If >1, the projection matrix is stored across num_proj_shards.
  forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training.
  state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.
  activation: Activation function of the inner states.
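An illustrative constructor call exercising the main options above; the values are arbitrary:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.LSTMCell(
        num_units=512,
        use_peepholes=True,   # diagonal/peephole connections
        cell_clip=3.0,        # clip cell state before the output activation
        num_proj=256,         # project outputs down to 256 dimensions
        proj_clip=3.0,        # clip projected values to [-3.0, 3.0]
        forget_bias=1.0,      # bias forget gates toward remembering early on
        state_is_tuple=True)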

tf.nn.rnn_cell.InputProjectionWrapper.output_size

tf.nn.rnn_cell.InputProjectionWrapper.output_size Integer or TensorShape: size of outputs produced by this cell.

tf.nn.rnn_cell.InputProjectionWrapper.__call__()

tf.nn.rnn_cell.InputProjectionWrapper.__call__(inputs, state, scope=None) Run the input projection and then the cell.

tf.nn.rnn_cell.InputProjectionWrapper

class tf.nn.rnn_cell.InputProjectionWrapper Operator adding an input projection to the given cell. Note: in many cases it may be more efficient to not use this wrapper, but instead concatenate the whole sequence of your inputs in time, do the projection on this batch-concatenated sequence, then split it.
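A hedged sketch of the batch-concatenated alternative the note describes, assuming TF 1.x ops (the tf.concat/tf.split argument order differs in older releases); names and shapes are illustrative:

    import tensorflow as tf

    time_steps, batch, input_dim, proj_dim = 10, 32, 64, 128

    # One [batch, input_dim] tensor per time step.
    inputs = [tf.placeholder(tf.float32, [batch, input_dim])
              for _ in range(time_steps)]

    # Project the whole sequence with a single matmul, then split it back
    # into per-step tensors instead of projecting inside every cell call.
    w = tf.get_variable("proj_w", [input_dim, proj_dim])
    stacked = tf.concat(inputs, axis=0)        # [time*batch, input_dim]
    projected = tf.matmul(stacked, w)          # [time*batch, proj_dim]
    steps = tf.split(projected, time_steps, axis=0)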

tf.nn.rnn_cell.InputProjectionWrapper.__init__()

tf.nn.rnn_cell.InputProjectionWrapper.__init__(cell, num_proj, input_size=None)

Create a cell with input projection.

Args:
  cell: an RNNCell, a projection of inputs is added before it.
  num_proj: Python integer. The dimension to project to.
  input_size: Deprecated and unused.

Raises:
  TypeError: if cell is not an RNNCell.
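A minimal usage sketch, assuming the same-era API; the dimensions are illustrative:

    import tensorflow as tf

    base = tf.nn.rnn_cell.LSTMCell(num_units=256)
    # Inputs are linearly projected to 64 dimensions before reaching the LSTM.
    cell = tf.nn.rnn_cell.InputProjectionWrapper(base, num_proj=64)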

tf.nn.rnn_cell.InputProjectionWrapper.zero_state()

tf.nn.rnn_cell.InputProjectionWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.
  If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
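Since an LSTMCell with state_is_tuple=True has a tuple state_size, zero_state returns a matching tuple. A hedged sketch with illustrative sizes:

    import tensorflow as tf

    cell = tf.nn.rnn_cell.LSTMCell(num_units=128, state_is_tuple=True)
    wrapped = tf.nn.rnn_cell.InputProjectionWrapper(cell, num_proj=64)

    # A tuple of two [32, 128] zero tensors (the c and m states).
    init_state = wrapped.zero_state(batch_size=32, dtype=tf.float32)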