tf.contrib.distributions.Chi2WithAbsDf.entropy()

tf.contrib.distributions.Chi2WithAbsDf.entropy(name='entropy') Shannon entropy in nats. Additional documentation from Gamma: This is defined to be entropy = alpha - log(beta) + log(Gamma(alpha)) + (1 - alpha) * digamma(alpha), where digamma(alpha) is the digamma function.
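The formula above can be sanity-checked in plain Python. This is an illustrative sketch, not the TensorFlow implementation: it approximates digamma with a central difference of `math.lgamma`, and uses the fact that a chi-squared distribution with `df` degrees of freedom is Gamma(alpha=df/2, beta=1/2).

```python
import math

def gamma_entropy(alpha, beta):
    """Shannon entropy (in nats) of Gamma(alpha, beta), per the formula in
    the docstring above. digamma is approximated by a central difference of
    log-Gamma, which is accurate enough for a sanity check."""
    h = 1e-5
    digamma = (math.lgamma(alpha + h) - math.lgamma(alpha - h)) / (2 * h)
    return alpha - math.log(beta) + math.lgamma(alpha) + (1 - alpha) * digamma

# Chi-squared with df=2 is Gamma(alpha=1, beta=1/2), i.e. Exponential(rate=1/2),
# whose entropy is known in closed form: 1 + ln 2.
print(gamma_entropy(1.0, 0.5))  # ≈ 1.6931
```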

tf.contrib.distributions.Uniform.get_batch_shape()

tf.contrib.distributions.Uniform.get_batch_shape() Shape of a single sample from a single event index as a TensorShape. Same meaning as batch_shape. May be only partially defined. Returns: batch_shape: TensorShape, possibly unknown.

tf.contrib.distributions.Categorical.variance()

tf.contrib.distributions.Categorical.variance(name='variance') Variance.

tf.VarLenFeature.__new__()

tf.VarLenFeature.__new__(_cls, dtype) Create new instance of VarLenFeature(dtype).
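The `__new__(_cls, dtype)` signature is the auto-generated constructor of a one-field namedtuple. A minimal stand-in (not the real `tf.VarLenFeature`, just the same pattern) makes this concrete:

```python
import collections

# Stand-in for tf.VarLenFeature: a single-field namedtuple. namedtuple
# generates a __new__ whose first parameter is the class itself (_cls),
# which is why the documented signature reads __new__(_cls, dtype).
VarLenFeature = collections.namedtuple('VarLenFeature', ['dtype'])

feature = VarLenFeature(dtype='int64')
print(feature)  # VarLenFeature(dtype='int64')
```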

tf.contrib.rnn.TimeFreqLSTMCell.__init__()

tf.contrib.rnn.TimeFreqLSTMCell.__init__(num_units, use_peepholes=False, cell_clip=None, initializer=None, num_unit_shards=1, forget_bias=1.0, feature_size=None, frequency_skip=None) Initialize the parameters for an LSTM cell.
Args:
  num_units: int, the number of units in the LSTM cell.
  use_peepholes: bool, set True to enable diagonal/peephole connections.
  cell_clip: (optional) A float value; if provided, the cell state is clipped by this value prior to the cell output activation.
  initializer: (optional) The initializer to use for the weight and projection matrices.
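The effect of `cell_clip` can be sketched without TensorFlow. The helper below is hypothetical and operates on a scalar for clarity; the real cell applies the same bound elementwise to the cell-state tensor before the output activation:

```python
def clip_cell_state(c, cell_clip):
    """Sketch of what cell_clip does: bound the cell state to
    [-cell_clip, cell_clip] before the output activation is applied.
    cell_clip=None disables clipping, matching the default above."""
    if cell_clip is None:
        return c
    return max(-cell_clip, min(c, cell_clip))

print(clip_cell_state(3.7, 1.0))   # 1.0
print(clip_cell_state(-0.2, 1.0))  # -0.2
```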

tf.argmax()

tf.argmax(input, dimension, name=None) Returns the index with the largest value across dimensions of a tensor.
Args:
  input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  dimension: A Tensor of type int32 or int64, with 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
  name: A name for the operation (optional).
Returns:
  A Tensor of type int64.
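The `dimension` semantics are easy to misread, so here is a pure-Python sketch for the 2-D case (an illustration of the reduction rule, not TensorFlow code): `dimension=0` reduces across rows and yields one index per column, `dimension=1` reduces across columns and yields one index per row.

```python
def argmax_2d(matrix, dimension):
    """Pure-Python sketch of tf.argmax semantics for a 2-D input.
    dimension=0: reduce down each column; dimension=1: reduce along each row."""
    if dimension == 0:
        n_rows, n_cols = len(matrix), len(matrix[0])
        return [max(range(n_rows), key=lambda r: matrix[r][c]) for c in range(n_cols)]
    elif dimension == 1:
        return [max(range(len(row)), key=lambda c: row[c]) for row in matrix]
    raise ValueError("only 2-D inputs are sketched here")

m = [[1, 9, 3],
     [7, 2, 8]]
print(argmax_2d(m, 0))  # [1, 0, 1]  (per-column row indices)
print(argmax_2d(m, 1))  # [1, 2]     (per-row column indices)
```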

tf.contrib.learn.Estimator.partial_fit()

tf.contrib.learn.Estimator.partial_fit(x=None, y=None, input_fn=None, steps=1, batch_size=None, monitors=None) Incremental fit on a batch of samples. This method is expected to be called several times consecutively on different or the same chunks of the dataset. This can implement either iterative training or out-of-core/online training. This is especially useful when the whole dataset is too big to fit in memory at the same time, or when the model is taking a long time to converge and you want to split up training into subparts.
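The calling pattern partial_fit supports can be sketched with a hypothetical running-mean "model" standing in for a real Estimator: each call consumes one chunk, so the full dataset never has to be in memory at once.

```python
class RunningMeanModel:
    """Toy stand-in for an Estimator, used only to show the incremental-fit
    calling pattern: repeated partial_fit calls over chunks of data."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def partial_fit(self, batch):
        self.total += sum(batch)
        self.count += len(batch)
        return self  # Estimator.partial_fit likewise returns self

    def predict(self):
        return self.total / self.count

model = RunningMeanModel()
for chunk in ([1, 2, 3], [4, 5], [6]):  # in practice, chunks streamed from disk
    model.partial_fit(chunk)
print(model.predict())  # 3.5
```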

tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state()

tf.contrib.rnn.LayerNormBasicLSTMCell.zero_state(batch_size, dtype) Return zero-filled state tensor(s).
Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.
Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
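The int-vs-nested contract described in the Returns section can be sketched with plain nested lists standing in for tensors (an illustration only; the hypothetical helper below mirrors the shape rule, not the real implementation):

```python
def zero_state(batch_size, state_size):
    """Sketch of the zero_state contract: an int state_size yields one
    [batch_size, state_size] block of zeros; a tuple state_size yields a
    tuple of such blocks, one per element, preserving the structure."""
    if isinstance(state_size, int):
        return [[0.0] * state_size for _ in range(batch_size)]
    return tuple(zero_state(batch_size, s) for s in state_size)

print(zero_state(2, 3))       # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(zero_state(2, (3, 1)))  # a (c, h)-style tuple of two zero blocks
```

LSTM cells typically use a nested `(c, h)` state, which is why the nested branch matters in practice.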

tf.SparseTensor.from_value()

tf.SparseTensor.from_value(cls, sparse_tensor_value)

tf.contrib.distributions.InverseGammaWithSoftplusAlphaBeta.get_batch_shape()

tf.contrib.distributions.InverseGammaWithSoftplusAlphaBeta.get_batch_shape() Shape of a single sample from a single event index as a TensorShape. Same meaning as batch_shape. May be only partially defined. Returns: batch_shape: TensorShape, possibly unknown.