tf.contrib.metrics.streaming_mean_squared_error()

tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None)

Computes the mean squared error between the labels and predictions. The streaming_mean_squared_error function creates two local variables, total and count, that are used to compute the mean squared error. This average is weighted by weights, and it is ultimately returned as mean_squared_error: an idempotent operation that simply divides total by count.
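
The function returns the current metric value together with an update op. A minimal usage sketch against the TF 1.x contrib API (the local variables total and count must be initialized before the first update):

  import tensorflow as tf

  predictions = tf.constant([1.0, 2.0, 4.0])
  labels = tf.constant([1.0, 2.0, 3.0])

  mse, update_op = tf.contrib.metrics.streaming_mean_squared_error(predictions, labels)

  with tf.Session() as sess:
      sess.run(tf.local_variables_initializer())  # initializes total and count
      sess.run(update_op)                         # accumulates this batch into total/count
      print(sess.run(mse))                        # 0.333...: squared errors 0, 0, 1 averaged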

tf.assert_rank()

tf.assert_rank(x, rank, data=None, summarize=None, message=None, name=None)

Assert x has rank equal to rank.

Example of adding a dependency to an operation:

  with tf.control_dependencies([tf.assert_rank(x, 2)]):
    output = tf.reduce_sum(x)

Example of adding a dependency to the tensor being checked:

  x = tf.with_dependencies([tf.assert_rank(x, 2)], x)

Args:
  x: Numeric Tensor.
  rank: Scalar integer Tensor.
  data: The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
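
A runnable sketch in graph mode; when the shape of x is statically known, a wrong rank fails at graph-construction time rather than at run time:

  import tensorflow as tf

  x = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank-2 tensor

  # reduce_sum only runs after the rank assertion passes.
  with tf.control_dependencies([tf.assert_rank(x, 2)]):
      output = tf.reduce_sum(x)

  with tf.Session() as sess:
      print(sess.run(output))  # 10.0

  # tf.assert_rank(x, 3) would raise ValueError immediately, since the
  # static shape (2, 2) already contradicts the requested rank.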

tf.contrib.distributions.DirichletMultinomial.event_shape()

tf.contrib.distributions.DirichletMultinomial.event_shape(name='event_shape')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:
  name: name to give to the op

Returns:
  event_shape: Tensor.
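
A minimal sketch, assuming the older n/alpha constructor of the contrib distribution (later releases renamed these parameters to total_count/concentration):

  import tensorflow as tf

  # Three classes, two draws per sample.
  dist = tf.contrib.distributions.DirichletMultinomial(n=2.0, alpha=[1.0, 2.0, 3.0])

  with tf.Session() as sess:
      print(sess.run(dist.event_shape()))  # [3]: one sample is a length-3 count vector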

tf.contrib.distributions.InverseGamma.log_survival_function()

tf.contrib.distributions.InverseGamma.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
  name: The name to give this op.
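
A minimal sketch, assuming the older alpha/beta parameterization of the contrib InverseGamma (later renamed concentration/rate):

  import tensorflow as tf

  dist = tf.contrib.distributions.InverseGamma(alpha=3.0, beta=2.0)

  # log P[X > 1.5], rather than computing log(1 - cdf(1.5)) by hand.
  log_sf = dist.log_survival_function(1.5)

  with tf.Session() as sess:
      print(sess.run(log_sf))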

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalCholeskyTensor.loss()

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalCholeskyTensor.loss(final_loss, name='Loss')

tf.contrib.distributions.Mixture.log_survival_function()

tf.contrib.distributions.Mixture.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
  name: The name to give this op.
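
A minimal sketch of calling this on a two-component Gaussian mixture; the mu/sigma parameter names follow the older contrib Normal (later loc/scale), and the mixture weights are given as log-probabilities via the Categorical logits:

  import tensorflow as tf
  import numpy as np

  ds = tf.contrib.distributions

  # 0.7 * N(-1, 0.5) + 0.3 * N(2, 1).
  mix = ds.Mixture(
      cat=ds.Categorical(logits=np.log([0.7, 0.3])),
      components=[ds.Normal(mu=-1.0, sigma=0.5), ds.Normal(mu=2.0, sigma=1.0)])

  log_sf = mix.log_survival_function(0.0)  # log P[X > 0]

  with tf.Session() as sess:
      print(sess.run(log_sf))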

tf.contrib.bayesflow.stochastic_tensor.DirichletTensor.mean()

tf.contrib.bayesflow.stochastic_tensor.DirichletTensor.mean(name='mean')

tf.sparse_reorder()

tf.sparse_reorder(sp_input, name=None)

Reorders a SparseTensor into the canonical, row-major ordering. Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries. Reordering does not affect the shape of the SparseTensor.

For example, if sp_input has shape [4, 5] and indices / values:

  [0, 3]: b
  [0, 1]: a
  [3, 1]: d
  [2, 0]: c

then the output will be a SparseTensor of shape [4, 5] and indices / values:

  [0, 1]: a
  [0, 3]: b
  [2, 0]: c
  [3, 1]: d
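
The example above as a runnable sketch (the third SparseTensor argument is named dense_shape in TF 1.x and shape in earlier releases, so it is passed positionally here):

  import tensorflow as tf

  sp = tf.SparseTensor(
      [[0, 3], [0, 1], [3, 1], [2, 0]],  # indices, deliberately out of order
      ["b", "a", "d", "c"],              # values
      [4, 5])                            # shape

  reordered = tf.sparse_reorder(sp)

  with tf.Session() as sess:
      result = sess.run(reordered)
      print(result.indices)  # [[0 1], [0 3], [2 0], [3 1]]
      print(result.values)   # [b'a', b'b', b'c', b'd']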

tf.sparse_softmax()

tf.sparse_softmax(sp_input, name=None)

Applies softmax to a batched N-D SparseTensor. The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order. This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to:

  (1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension;
  (2) Masks out the original implicitly-zero locations;
  (3) Renormalizes the remaining elements.

Hence, the SparseTensor result has exactly the same non-zero indices and shape.
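
A minimal sketch showing that implicit zeros receive no probability mass:

  import tensorflow as tf

  # One 2x3 logical matrix; row 0 has a single explicit entry, row 1 is dense.
  sp = tf.SparseTensor(
      [[0, 0], [1, 0], [1, 1], [1, 2]],
      [1.0, 1.0, 2.0, 3.0],
      [2, 3])

  result = tf.sparse_softmax(sp)

  with tf.Session() as sess:
      out = sess.run(result)
      # Row 0's single element gets softmax value 1.0; the implicit zeros in
      # columns 1 and 2 stay zero. Row 1 is an ordinary softmax of [1, 2, 3].
      print(out.values)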

tf.contrib.distributions.DirichletMultinomial.prob()

tf.contrib.distributions.DirichletMultinomial.prob(value, name='prob')

Probability density/mass function (depending on is_continuous).

Additional documentation from DirichletMultinomial:

For each batch of counts [n_1,...,n_k], P[counts] is the probability that after sampling n draws from this Dirichlet Multinomial distribution, the number of draws falling in class j is n_j. Note that different sequences of draws can result in the same counts, thus the probability includes a combinatorial coefficient.
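
A minimal sketch, again assuming the older n/alpha constructor (later total_count/concentration); with a flat alpha the distribution is uniform over all count vectors:

  import tensorflow as tf

  dist = tf.contrib.distributions.DirichletMultinomial(n=2.0, alpha=[1.0, 1.0, 1.0])

  # Probability of the counts [1, 1, 0] over 2 draws. The combinatorial
  # coefficient counts both draw orders (class 1 then 2, or 2 then 1).
  p = dist.prob([1.0, 1.0, 0.0])

  with tf.Session() as sess:
      print(sess.run(p))  # 1/6: six possible count vectors, all equally likely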