tf.contrib.metrics.streaming_mean_squared_error()

tf.contrib.metrics.streaming_mean_squared_error(predictions, labels, weights=None, metrics_collections=None, updates_collections=None, name=None) Computes the mean squared error between the labels and predictions. The streaming_mean_squared_error function creates two local variables, total and count, that are used to compute the mean squared error. This average is weighted by weights, and it is ultimately returned as mean_squared_error: an idempotent operation that simply divides total by count.
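A minimal usage sketch, assuming a TF 1.x-era graph and session (the function returns the metric value tensor plus an update op, and the total/count accumulators live in local variables):

import tensorflow as tf

predictions = tf.constant([1.0, 2.0, 4.0])
labels = tf.constant([1.0, 2.0, 3.0])

# Returns the metric value tensor and the op that accumulates total and count.
mse, update_op = tf.contrib.metrics.streaming_mean_squared_error(predictions, labels)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # initializes total and count
    sess.run(update_op)                         # accumulate squared errors
    print(sess.run(mse))                        # (0 + 0 + 1) / 3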

tf.contrib.distributions.WishartCholesky.param_static_shapes()

tf.contrib.distributions.WishartCholesky.param_static_shapes(cls, sample_shape) param_shapes with static (i.e. TensorShape) shapes. Args: sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample(). Returns: dict of parameter name to TensorShape. Raises: ValueError: if sample_shape is a TensorShape and is not fully defined.
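An illustrative sketch of the param_static_shapes pattern; since the exact parameter mapping for WishartCholesky is not shown here, this assumes Normal as a stand-in distribution class that implements the mapping:

import tensorflow as tf

distributions = tf.contrib.distributions

# Classmethod call: for a desired sample shape, return a dict mapping each
# parameter name to the static TensorShape that parameter would need.
shapes = distributions.Normal.param_static_shapes(sample_shape=[100])
print(shapes)  # e.g. {parameter name: TensorShape([Dimension(100)])}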

tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.entropy()

tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.entropy(name='entropy')

tensorflow::Tensor::SummarizeValue()

string tensorflow::Tensor::SummarizeValue(int64 max_entries) const Render the first max_entries values in *this into a string.

tf.contrib.bayesflow.entropy.elbo_ratio()

If log_p(z) = Log[p(z)] for distribution p, this Op approximates the negative Kullback-Leibler divergence: elbo_ratio(log_p, q, n=100) ≈ -1 * KL[q || p], where KL[q || p] = E[ Log[q(Z)] - Log[p(Z)] ]. Note that if p is a Distribution, then distributions.kl(q, p) may be defined and available as an exact result.
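A hedged sketch of comparing this Monte Carlo estimator against the exact KL term, assuming a TF 1.x-era tf.contrib.bayesflow.entropy module and Normal distributions for p and q:

import tensorflow as tf

distributions = tf.contrib.distributions
entropy = tf.contrib.bayesflow.entropy

# Parameter names (mu, sigma) assume the contrib-era Normal constructor.
q = distributions.Normal(mu=0.0, sigma=1.0)
p = distributions.Normal(mu=1.0, sigma=2.0)

# Monte Carlo estimate of -KL[q || p] using n samples drawn from q.
approx_neg_kl = entropy.elbo_ratio(p.log_prob, q, n=1000)

# Exact value for comparison, since p is a Distribution and kl(q, p) is defined.
exact_neg_kl = -distributions.kl(q, p)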

tf.atan()

tf.atan(x, name=None) Computes atan of x element-wise. Args: x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128. name: A name for the operation (optional). Returns: A Tensor. Has the same type as x.
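For example (a small graph-mode sketch; results are in radians):

import tensorflow as tf

x = tf.constant([0.0, 1.0, -1.0])
y = tf.atan(x)  # element-wise arctangent

with tf.Session() as sess:
    print(sess.run(y))  # approximately [0.0, 0.7854, -0.7854], i.e. [0, pi/4, -pi/4]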

tf.train.range_input_producer()

tf.train.range_input_producer(limit, num_epochs=None, shuffle=True, seed=None, capacity=32, shared_name=None, name=None) Produces the integers from 0 to limit-1 in a queue. Args: limit: An int32 scalar tensor. num_epochs: An integer (optional). If specified, range_input_producer produces each integer num_epochs times before generating an OutOfRange error. If not specified, range_input_producer can cycle through the integers an unlimited number of times. shuffle: Boolean. If true, the integers are randomly shuffled within each epoch.
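A usage sketch, assuming TF 1.x graph-mode queues and queue runners (with num_epochs set, the epoch counter is a local variable that must be initialized):

import tensorflow as tf

# Queue that yields the integers 0..9 once, in order.
index_queue = tf.train.range_input_producer(limit=10, num_epochs=1, shuffle=False)
index = index_queue.dequeue()

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # num_epochs is tracked in a local variable
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            print(sess.run(index))
    except tf.errors.OutOfRangeError:
        pass  # all epochs consumed
    finally:
        coord.request_stop()
        coord.join(threads)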

tf.contrib.metrics.streaming_sensitivity_at_specificity()

tf.contrib.metrics.streaming_sensitivity_at_specificity(predictions, labels, specificity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None) Computes the sensitivity at a given specificity. The streaming_sensitivity_at_specificity function creates four local variables, true_positives, true_negatives, false_positives and false_negatives, that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.
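A minimal sketch, assuming a TF 1.x-era graph and session with binary labels and probability-like predictions:

import tensorflow as tf

labels = tf.constant([0.0, 0.0, 1.0, 1.0])
predictions = tf.constant([0.1, 0.6, 0.4, 0.9])

# Returns the sensitivity value tensor and the op that updates the four counters.
sensitivity, update_op = tf.contrib.metrics.streaming_sensitivity_at_specificity(
    predictions, labels, specificity=0.5)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # the four confusion-matrix accumulators
    sess.run(update_op)
    print(sess.run(sensitivity))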

tf.contrib.distributions.Chi2WithAbsDf.log_cdf()

tf.contrib.distributions.Chi2WithAbsDf.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ] Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1. Args: value: float or double Tensor. name: The name to give this op. Returns: logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
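For instance (a small graph-mode sketch, assuming df=3):

import tensorflow as tf

dist = tf.contrib.distributions.Chi2WithAbsDf(df=3.0)

# log_cdf(x) = Log[ P[X <= x] ], evaluated element-wise.
log_cdf = dist.log_cdf([1.0, 2.0, 5.0])

with tf.Session() as sess:
    print(sess.run(log_cdf))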

tf.contrib.metrics.set_intersection()

tf.contrib.metrics.set_intersection(a, b, validate_indices=True) Compute set intersection of elements in the last dimension of a and b. All but the last dimension of a and b must match. Args: a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order. b: Tensor or SparseTensor of the same type as a. Must be a SparseTensor if a is a SparseTensor. If sparse, indices must be sorted in row-major order. validate_indices: Whether to validate the order and range of sparse indices in a and b.
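A small dense-input sketch, assuming a TF 1.x graph-mode session (the result is returned as a SparseTensor):

import tensorflow as tf

# Sets live along the last dimension; all other dimensions must match.
a = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64)
b = tf.constant([[1, 3, 5], [4, 6, 8]], dtype=tf.int64)

intersection = tf.contrib.metrics.set_intersection(a, b)  # SparseTensor result

with tf.Session() as sess:
    # Densify for display; rows contain {1, 3} and {4, 6}.
    print(sess.run(tf.sparse_tensor_to_dense(intersection)))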