tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.log_survival_function()

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

    log_survival_function(x) = Log[ P[X > x] ]
                             = Log[ 1 - P[X <= x] ]
                             = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.
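
A minimal sketch of the relationship above. The method is inherited from the common Distribution base class, so a scalar Normal is used here instead of the multivariate class only because its CDF has a cheap closed form to compare against (TF 1.x-era session API; the mu/sigma argument names are assumed).

    import tensorflow as tf

    ds = tf.contrib.distributions
    dist = ds.Normal(mu=0., sigma=1.)        # swapped-in scalar distribution

    x = tf.constant(2.)
    log_sf = dist.log_survival_function(x)   # evaluated in log-space
    naive = tf.log(1. - dist.cdf(x))         # reference: Log[ 1 - cdf(x) ]

    with tf.Session() as sess:
        print(sess.run([log_sf, naive]))     # the two values should agree closely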

tf.assert_negative()

tf.assert_negative(x, data=None, summarize=None, message=None, name=None)

Assert the condition x < 0 holds element-wise.

Example of adding a dependency to an operation:

    with tf.control_dependencies([tf.assert_negative(x)]):
      output = tf.reduce_sum(x)

Example of adding a dependency to the tensor being checked:

    x = tf.with_dependencies([tf.assert_negative(x)], x)

Negative means, for every element x[i] of x, we have x[i] < 0. If x is empty this is trivially satisfied.

Args:
  x: Numeric Tensor.
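
A runnable version of the first snippet above, assuming a TF 1.x-era session-based workflow; the assert op only takes effect when something downstream depends on it.

    import tensorflow as tf

    x = tf.placeholder(tf.float32)

    # The assertion runs before reduce_sum because of the control dependency.
    with tf.control_dependencies([tf.assert_negative(x)]):
        output = tf.reduce_sum(x)

    with tf.Session() as sess:
        print(sess.run(output, feed_dict={x: [-1., -2., -3.]}))   # -6.0
        # Feeding any non-negative element instead raises InvalidArgumentError.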

tf.contrib.graph_editor.copy()

tf.contrib.graph_editor.copy(sgv, dst_graph=None, dst_scope='', src_scope='', reuse_dst_scope=False)

Copy a subgraph.

Args:
  sgv: the source subgraph-view. This argument is converted to a subgraph using the same rules as the function subgraph.make_view.
  dst_graph: the destination graph.
  dst_scope: the destination scope.
  src_scope: the source scope.
  reuse_dst_scope: if True the dst_scope is re-used if it already exists. Otherwise, the scope is given a unique name based on the one given by
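
A small sketch of copying a whole graph into a fresh one, assuming a TF 1.x-era environment where tf.contrib.graph_editor is available. Passing a plain list of ops works because it is converted via subgraph.make_view; the return value (not shown in the truncated doc above) is ignored here.

    import tensorflow as tf

    ge = tf.contrib.graph_editor

    src = tf.Graph()
    with src.as_default():
        a = tf.constant(1.0, name="a")
        b = tf.add(a, a, name="b")

    dst = tf.Graph()
    # Copy every op of `src` into `dst` under the scope "copied".
    ge.copy(src.get_operations(), dst_graph=dst, dst_scope="copied")

    print([op.name for op in dst.get_operations()])   # e.g. ['copied/a', 'copied/b']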

tf.pow()

tf.pow(x, y, name=None)

Computes the power of one value to another. Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

    # tensor 'x' is [[2, 2], [3, 3]]
    # tensor 'y' is [[8, 16], [2, 3]]
    tf.pow(x, y) ==> [[256, 65536], [9, 27]]

Args:
  x: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  y: A Tensor of type float32, float64, int32, int64, complex64, or complex128.
  name: A name for the operation (optional).
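
The docstring example as a runnable snippet (TF 1.x-era session API assumed):

    import tensorflow as tf

    x = tf.constant([[2, 2], [3, 3]])
    y = tf.constant([[8, 16], [2, 3]])
    z = tf.pow(x, y)   # element-wise x^y

    with tf.Session() as sess:
        print(sess.run(z))   # [[  256 65536]
                             #  [    9    27]]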

tf.contrib.distributions.BernoulliWithSigmoidP.prob()

tf.contrib.distributions.BernoulliWithSigmoidP.prob(value, name='prob')

Probability density/mass function (depending on is_continuous).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
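
A minimal sketch, assuming the constructor takes the pre-sigmoid logits under the argument name p (an assumption about this TF version): the class applies sigmoid(p) to obtain the Bernoulli probability, so prob(1.) should return sigmoid(p).

    import tensorflow as tf

    ds = tf.contrib.distributions
    dist = ds.BernoulliWithSigmoidP(p=[0., 2.])   # `p` holds logits (assumed name)

    probs = dist.prob([1., 1.])   # P[X = 1] = sigmoid(p) ~= [0.5, 0.88]

    with tf.Session() as sess:
        print(sess.run(probs))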

tf.matrix_band_part()

tf.matrix_band_part(input, num_lower, num_upper, name=None)

Copy a tensor setting everything outside a central band in each innermost matrix to zero.

The band part is computed as follows: assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where

    band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].

The indicator function in_band(m, n) is one if

    (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper)
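
A concrete example of the band indicator, keeping one subdiagonal (num_lower=1) and every superdiagonal (num_upper=-1):

    import tensorflow as tf

    x = tf.constant([[ 0,  1,  2,  3],
                     [-1,  0,  1,  2],
                     [-2, -1,  0,  1],
                     [-3, -2, -1,  0]])

    banded = tf.matrix_band_part(x, 1, -1)

    with tf.Session() as sess:
        print(sess.run(banded))
        # [[ 0  1  2  3]
        #  [-1  0  1  2]
        #  [ 0 -1  0  1]
        #  [ 0  0 -1  0]]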

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__call__(inputs, state, scope=None)

Run one step of LSTM.

Args:
  inputs: input Tensor, 2D, batch x num_units.
  state: if state_is_tuple is False, this must be a state Tensor, 2-D, batch x state_size. If state_is_tuple is True, this must be a tuple of state Tensors, both 2-D, with column sizes c_state and m_state.
  scope: VariableScope for the created subgraph; defaults to "LSTMCell".

Returns:
  A tuple containing:
  - A 2-D, [batch x output_dim], Tensor
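
A single-step sketch, assuming a TF 1.x-era API where the cell is constructed with num_units and supports state_is_tuple (the constructor details are assumptions, not part of the truncated doc above):

    import tensorflow as tf

    batch_size, num_units = 4, 8
    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(num_units, state_is_tuple=True)

    inputs = tf.placeholder(tf.float32, [batch_size, num_units])
    state = cell.zero_state(batch_size, tf.float32)   # (c_state, m_state) tuple
    output, new_state = cell(inputs, state)           # one LSTM step

    print(output.get_shape())   # (4, 8)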

tf.contrib.distributions.DirichletMultinomial.sample()

tf.contrib.distributions.DirichletMultinomial.sample(sample_shape=(), seed=None, name='sample')

Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.

Args:
  sample_shape: 0D or 1D int32 Tensor. Shape of the generated samples.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with prepended dimensions sample_shape.
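
The key point is that sample_shape is prepended to the batch shape. The method is shared by every Distribution subclass; a scalar Normal with a batch of three components is swapped in below purely so the resulting shape is easy to read (mu/sigma argument names are assumed for this TF version).

    import tensorflow as tf

    ds = tf.contrib.distributions
    dist = ds.Normal(mu=[0., 0., 0.], sigma=[1., 1., 1.])   # batch_shape = [3]

    s = dist.sample(sample_shape=[5], seed=42)   # sample_shape [5] is prepended

    with tf.Session() as sess:
        print(sess.run(s).shape)   # (5, 3)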

tf.contrib.distributions.WishartFull.event_shape()

tf.contrib.distributions.WishartFull.event_shape(name='event_shape')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:
  name: name to give to the op.

Returns:
  event_shape: Tensor.
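
A short sketch, assuming WishartFull is parameterized by df and a full scale matrix (the argument names are an assumption): the event of a Wishart distribution is a matrix, so event_shape() should report [dims, dims].

    import tensorflow as tf

    ds = tf.contrib.distributions
    scale = tf.constant([[1., 0., 0.],
                         [0., 1., 0.],
                         [0., 0., 1.]])
    wishart = ds.WishartFull(df=5., scale=scale)

    with tf.Session() as sess:
        print(sess.run(wishart.event_shape()))   # [3 3]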

tf.contrib.distributions.Normal.log_pmf()

tf.contrib.distributions.Normal.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.
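
A sketch of the TypeError path, assuming the Normal constructor takes mu/sigma and that log_pdf is available as the continuous counterpart (both are assumptions about this TF version):

    import tensorflow as tf

    ds = tf.contrib.distributions
    normal = ds.Normal(mu=0., sigma=1.)

    try:
        normal.log_pmf(0.)   # Normal is continuous, so this raises TypeError
    except TypeError as e:
        print("log_pmf rejected:", e)

    log_density = normal.log_pdf(0.)   # use the density for continuous distributions
    with tf.Session() as sess:
        print(sess.run(log_density))   # log(1/sqrt(2*pi)) ~= -0.9189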