tf.contrib.distributions.Exponential.pmf()

tf.contrib.distributions.Exponential.pmf(value, name='pmf')

Probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.contrib.distributions.Exponential.cdf()

tf.contrib.distributions.Exponential.cdf(value, name='cdf')

Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:

  cdf(x) := P[X <= x]

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
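The Exponential distribution has a closed-form CDF, so the definition above can be sketched directly in NumPy. This is an illustrative sketch, not the TensorFlow op; the rate-parameter name lam is an assumption for the example:

```python
import numpy as np

def exponential_cdf(x, lam):
    """CDF of an Exponential(lam) distribution: P[X <= x] = 1 - exp(-lam * x)."""
    x = np.asarray(x, dtype=float)
    # The support is [0, inf), so the CDF is zero for negative x.
    return np.where(x < 0, 0.0, 1.0 - np.exp(-lam * x))

print(exponential_cdf([0.0, 1.0, 10.0], lam=1.0))
# CDF at 0 is 0; at 1 it is 1 - e**-1; it approaches 1 for large x.
```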

tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.StochasticTensor.__init__(dist_cls, name=None, dist_value_type=None, loss_fn=score_function, **dist_args)

Construct a StochasticTensor.

StochasticTensor will instantiate a distribution from dist_cls and dist_args, and its value method will return the same value each time it is called. The value returned is controlled by dist_value_type (defaults to SampleAndReshapeValue). Some distributions' sample functions are not differentiable (e.g. a sample from a discrete distribution), in which case loss_fn (defaulting to the score function estimator) is used to estimate gradients with respect to the distribution's parameters.
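The score function estimator mentioned above works around non-differentiable samples via the identity d/dθ E[f(x)] = E[f(x) · d/dθ log p(x; θ)]. A minimal NumPy sketch for a Bernoulli(θ) variable, where the sample itself has no gradient (the function names here are illustrative, not the bayesflow API):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(theta, f, n_samples=200000):
    """Monte Carlo estimate of d/dtheta E_{x ~ Bernoulli(theta)}[f(x)]
    using the score function: E[f(x) * d/dtheta log p(x; theta)]."""
    x = (rng.random(n_samples) < theta).astype(float)
    # Score of a Bernoulli: d/dtheta log p(x; theta) = x/theta - (1-x)/(1-theta)
    score = x / theta - (1.0 - x) / (1.0 - theta)
    return np.mean(f(x) * score)

# With f(x) = x, E[f(x)] = theta, so the true gradient is exactly 1.
print(score_function_grad(0.3, lambda x: x))  # close to 1.0
```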

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.sample_n()

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for the RNG.
  name: The name to give this op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
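The prepended (n,) dimension is the same shape convention NumPy uses when drawing multiple multivariate normal samples. A sketch with an illustrative 3-dimensional distribution (mu and cov here are assumed example values, and a plain diagonal covariance stands in for the diag-plus-VDVT structure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-dimensional MVN with a diagonal covariance for simplicity.
mu = np.zeros(3)
cov = np.diag([1.0, 2.0, 3.0])

# Drawing n = 5 observations prepends a dimension of size 5.
samples = rng.multivariate_normal(mu, cov, size=5)
print(samples.shape)  # (5, 3)
```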

tf.matrix_diag_part()

tf.matrix_diag_part(input, name=None)

Returns the batched diagonal part of a batched tensor.

This operation returns a tensor with the diagonal part of the batched input. The diagonal part is computed as follows: assume input has k dimensions [I, J, K, ..., N, N]; then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., N], where:

  diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].

The input must be at least a matrix. For example:

  # 'input' is [[[1, 0, 0, 0]
  #              [0, 2, 0, 0]
  #              [0, 0, 3, 0]
  #              [0, 0, 0, 4]],
  #             [[5, 0, 0, 0]
  #              [0, 6, 0, 0]
  #              [0, 0, 7, 0]
  #              [0, 0, 0, 8]]]
  # and input.shape = (2, 4, 4)
  tf.matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]
  # which has shape (2, 4)
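The same batched-diagonal semantics can be reproduced in NumPy, where np.diagonal over the last two axes gives the identical shape contraction:

```python
import numpy as np

def matrix_diag_part(x):
    """Batched diagonal: for input of shape [..., N, N], return shape [..., N]
    with out[..., n] = x[..., n, n]."""
    return np.diagonal(x, axis1=-2, axis2=-1)

batch = np.array([[[1, 0], [0, 2]],
                  [[5, 0], [0, 6]]])  # shape (2, 2, 2)
print(matrix_diag_part(batch))        # [[1 2] [5 6]], shape (2, 2)
```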

tf.contrib.learn.monitors.CaptureVariable.end()

tf.contrib.learn.monitors.CaptureVariable.end(session=None)

tf.image.adjust_saturation()

tf.image.adjust_saturation(image, saturation_factor, name=None)

Adjust saturation of an RGB image.

This is a convenience method that converts an RGB image to float representation, converts it to HSV, scales the saturation channel, converts back to RGB, and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions.

image is an RGB image. The image saturation is adjusted by converting the image to HSV and multiplying the saturation (S) channel by saturation_factor and clipping. The image is then converted back to RGB.
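The RGB -> HSV -> scale S -> RGB round trip described above can be sketched per pixel with the standard-library colorsys module. This is an illustrative reimplementation over a float image in [0, 1], not the TensorFlow kernel:

```python
import colorsys
import numpy as np

def adjust_saturation(image, saturation_factor):
    """Scale the HSV saturation channel of a float RGB image in [0, 1]."""
    out = np.empty_like(image, dtype=float)
    for idx in np.ndindex(image.shape[:-1]):           # iterate over pixels
        h, s, v = colorsys.rgb_to_hsv(*image[idx])
        s = min(max(s * saturation_factor, 0.0), 1.0)  # scale, then clip
        out[idx] = colorsys.hsv_to_rgb(h, s, v)
    return out

pixel = np.array([[[0.5, 0.25, 0.25]]])  # a dull red pixel
desat = adjust_saturation(pixel, 0.0)    # factor 0 removes all saturation
print(desat)                             # a gray pixel: [[[0.5 0.5 0.5]]]
```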

tf.contrib.learn.monitors.ValidationMonitor.end()

tf.contrib.learn.monitors.ValidationMonitor.end(session=None)

tf.contrib.bayesflow.stochastic_tensor.WishartCholeskyTensor.input_dict

tf.contrib.bayesflow.stochastic_tensor.WishartCholeskyTensor.input_dict

tf.sparse_to_dense()

tf.sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0, validate_indices=True, name=None)

Converts a sparse representation into a dense tensor.

Builds an array dense with shape output_shape such that:

  # If sparse_indices is scalar
  dense[i] = (i == sparse_indices ? sparse_values : default_value)

  # If sparse_indices is a vector, then for each i
  dense[sparse_indices[i]] = sparse_values[i]

  # If sparse_indices is an n by d matrix, then for each i in [0, n)
  dense[sparse_indices[i][0], ..., sparse_indices[i][d-1]] = sparse_values[i]

All other values in dense are set to default_value.
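The fill rules above can be sketched in a few lines of NumPy. This minimal version handles only the n-by-d matrix form of sparse_indices (with sparse_values either length n or a scalar broadcast to every index), and omits the validate_indices checks:

```python
import numpy as np

def sparse_to_dense(sparse_indices, output_shape, sparse_values, default_value=0):
    """Fill a dense array of shape output_shape with default_value, then set
    dense[sparse_indices[i]] = sparse_values[i] for each row i of indices."""
    dense = np.full(output_shape, default_value)
    sparse_indices = np.atleast_2d(sparse_indices)   # n x d coordinate rows
    sparse_values = np.broadcast_to(sparse_values, (sparse_indices.shape[0],))
    for idx, val in zip(sparse_indices, sparse_values):
        dense[tuple(idx)] = val
    return dense

print(sparse_to_dense([[0, 0], [1, 2]], (2, 3), [7, 9]))
# [[7 0 0]
#  [0 0 9]]
```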