tf.contrib.rnn.TimeFreqLSTMCell.__call__()

tf.contrib.rnn.TimeFreqLSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:

  inputs: input Tensor, 2D, batch x num_units.
  state: state Tensor, 2D, batch x state_size.
  scope: VariableScope for the created subgraph; defaults to "TimeFreqLSTMCell".

Returns:

A tuple containing:

- A 2D, batch x output_dim, Tensor representing the output of the LSTM after reading "inputs" when the previous state was "state". Here output_dim is num_units.
- A 2D, batch x state_size, Tensor representing the new state of the LSTM after reading "inputs" when the previous state was "state".

tf.contrib.rnn.TimeFreqLSTMCell.__init__()

tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None)

Initialize the parameters for an LSTM cell.

Args:

  num_units: int, The number of units in the LSTM cell.
  use_peepholes: bool, set True to enable diagonal/peephole connections.
  cell_clip: (optional) A float value; if provided, the cell state is clipped by this value prior to the cell output activation.
  initializer: (optional) The initializer to use for the weight and projection matrices.
  num_unit_shards: int, How to split the weight matrix. If > 1, the weight matrix is stored across num_unit_shards.
  forget_bias: float, Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of the training.
  feature_size: int, The size of the input feature the LSTM spans over.
  frequency_skip: int, The amount the LSTM filter is shifted by in frequency.
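A minimal usage sketch for constructing the cell and running a single step (assuming TensorFlow 1.x, where tf.contrib is available); all shapes and hyperparameter values below are illustrative assumptions. The input width is chosen equal to feature_size so the cell spans a single frequency block and the zero state matches state_size:

    import tensorflow as tf

    batch_size, num_units, feature_size = 4, 32, 16

    cell = tf.contrib.rnn.TimeFreqLSTMCell(
        num_units=num_units,
        use_peepholes=True,       # diagonal/peephole connections
        cell_clip=10.0,           # clip cell state before the output activation
        feature_size=feature_size,
        frequency_skip=1)

    # One input frame, 2D: batch x input feature dimension (assumed equal
    # to feature_size here, i.e. a single frequency block).
    inputs = tf.placeholder(tf.float32, [batch_size, feature_size])
    state = cell.zero_state(batch_size, tf.float32)

    output, new_state = cell(inputs, state)  # one LSTM step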

tf.contrib.rnn.TimeFreqLSTMCell.state_size

tf.contrib.rnn.TimeFreqLSTMCell.state_size

The size(s) of state(s) used by this cell.

tf.contrib.rnn.TimeFreqLSTMCell.output_size

tf.contrib.rnn.TimeFreqLSTMCell.output_size

Integer or TensorShape: size of outputs produced by this cell.

tf.contrib.rnn.LSTMBlockCell.zero_state()

tf.contrib.rnn.LSTMBlockCell.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:

  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:

If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
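A minimal sketch (assuming TensorFlow 1.x). LSTMBlockCell's state_size is a nested (c, h) pair, so zero_state takes the nested branch described above and returns an LSTMStateTuple of two 2-D zero tensors rather than a single tensor:

    import tensorflow as tf

    cell = tf.contrib.rnn.LSTMBlockCell(num_units=128)
    state = cell.zero_state(batch_size=32, dtype=tf.float32)

    print(state.c.shape)  # (32, 128) -- zero-filled cell state
    print(state.h.shape)  # (32, 128) -- zero-filled hidden/output state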

tf.contrib.rnn.LSTMBlockCell

class tf.contrib.rnn.LSTMBlockCell

Basic LSTM recurrent network cell. The implementation is based on: http://arxiv.org/abs/1409.2329. We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training. Unlike BasicLSTMCell, this is a monolithic op and should be much faster. The weight and bias matrices should be compatible as long as the variable scope matches.

tf.contrib.rnn.LSTMBlockCell.__call__()

tf.contrib.rnn.LSTMBlockCell.__call__(x, states_prev, scope=None)

Long short-term memory cell (LSTM).
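A minimal single-step sketch (assuming TensorFlow 1.x); batch and layer sizes are illustrative assumptions:

    import tensorflow as tf

    batch_size, input_size, num_units = 16, 64, 128

    cell = tf.contrib.rnn.LSTMBlockCell(
        num_units=num_units,
        forget_bias=1.0,     # added to the forget-gate bias (see class notes)
        use_peephole=False)  # set True to enable peephole connections

    x = tf.placeholder(tf.float32, [batch_size, input_size])
    states_prev = cell.zero_state(batch_size, tf.float32)

    # One fused step: returns the output h and the new (c, h) state tuple.
    output, states_new = cell(x, states_prev)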

tf.contrib.rnn.LSTMBlockCell.state_size

tf.contrib.rnn.LSTMBlockCell.state_size

The size(s) of state(s) used by this cell.

tf.contrib.rnn.LSTMBlockCell.__init__()

tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False)

Initialize the basic LSTM cell.

Args:

  num_units: int, The number of units in the LSTM cell.
  forget_bias: float, The bias added to forget gates (see above).
  use_peephole: Whether to use peephole connections or not.

tf.contrib.rnn.LSTMBlockCell.output_size

tf.contrib.rnn.LSTMBlockCell.output_size

Integer or TensorShape: size of outputs produced by this cell.