tf.string_to_hash_bucket_fast()

tf.string_to_hash_bucket_fast(input, num_buckets, name=None)

Converts each string in the input Tensor to its hash modulo num_buckets. The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.
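A minimal usage sketch (graph-mode API of this era; the concrete bucket ids depend on the hash and are illustrative only):

  import tensorflow as tf

  words = tf.constant(["hello", "world", "hello"])
  buckets = tf.string_to_hash_bucket_fast(words, num_buckets=10)

  with tf.Session() as sess:
      # Identical strings always map to the same bucket within a process.
      print(sess.run(buckets))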

tf.nn.rnn_cell.RNNCell

class tf.nn.rnn_cell.RNNCell

Abstract object representing an RNN cell. The definition of a cell in this package differs from the definition used in the literature. In the literature, a cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units. An RNN cell, in the most abstract setting, is anything that has a state and performs some operation that takes a matrix of inputs. This operation results in an output matrix with self.output_size columns.
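To make the contract concrete, here is a sketch using the BasicRNNCell subclass (batch size and input size are illustrative):

  import tensorflow as tf

  cell = tf.nn.rnn_cell.BasicRNNCell(num_units=64)
  inputs = tf.placeholder(tf.float32, [32, 128])           # one step of inputs
  state = cell.zero_state(batch_size=32, dtype=tf.float32)
  # A cell call maps (inputs, state) to (output, new_state);
  # output has self.output_size == 64 columns.
  output, new_state = cell(inputs, state)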

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace()

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace(log_f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler_logspace')

Importance sampling with a positive function, in log-space. With p(z) := exp{log_p(z)} and f(z) := exp{log_f(z)}, this Op returns

  Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ],  z_i ~ q,
  \approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]
  = Log[ E_p[ f(Z) ] ]

This integral is done in log-space with max-subtraction to better handle the often extreme values that f(z) p(z) / q(z) can take on.
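A hedged sketch of a call (the target p, proposal q, and f(z) = z^2 are assumptions for illustration, not from the API docs):

  import tensorflow as tf
  from tensorflow.contrib import distributions
  from tensorflow.contrib.bayesflow import monte_carlo

  p = distributions.Normal(mu=0., sigma=1.)  # target density p(z)
  q = distributions.Normal(mu=0., sigma=2.)  # broader proposal q(z)

  # Estimate Log[E_p[f(Z)]] for f(z) = z^2, the log second moment of p.
  log_e = monte_carlo.expectation_importance_sampler_logspace(
      log_f=lambda z: 2. * tf.log(tf.abs(z)),  # log f(z) = log(z^2)
      log_p=p.log_pdf,
      sampling_dist_q=q,
      n=10000)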

tf.contrib.distributions.Normal.log_pmf()

tf.contrib.distributions.Normal.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.
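Because Normal is continuous (is_continuous is true), this method raises; the density methods are the ones to use. A small sketch:

  import tensorflow as tf
  from tensorflow.contrib import distributions

  dist = distributions.Normal(mu=0., sigma=1.)
  try:
      dist.log_pmf(0.5)                # continuous distribution: TypeError
  except TypeError:
      log_density = dist.log_pdf(0.5)  # use the log density instead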

tf.ifft2d()

tf.ifft2d(input, name=None)

Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.

Args:
  input: A Tensor of type complex64.
  name: A name for the operation (optional).

Returns:
  A Tensor of type complex64 of the same shape as input. The inner-most 2 dimensions of input are replaced with their inverse 2D Fourier Transform.
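A round-trip sketch: applying tf.fft2d and then tf.ifft2d over the inner-most two dimensions recovers the input up to floating-point error (shapes are illustrative):

  import numpy as np
  import tensorflow as tf

  x = tf.constant(np.random.rand(4, 8, 8).astype(np.complex64))
  spectrum = tf.fft2d(x)           # forward 2D DFT over the last two dims
  recovered = tf.ifft2d(spectrum)  # recovered ~= x up to float error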

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.pmf()

tf.contrib.distributions.MultivariateNormalDiagPlusVDVT.pmf(value, name='pmf')

Probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.contrib.bayesflow.stochastic_tensor.SampleValue

class tf.contrib.bayesflow.stochastic_tensor.SampleValue

Draw n samples along a new outer dimension. This ValueType draws n samples from StochasticTensors run within its context, increasing the rank by one along a new outer dimension.

Example:

  mu = tf.zeros((2, 3))
  sigma = tf.ones((2, 3))
  with sg.value_type(sg.SampleValue(n=4)):
    dt = sg.DistributionTensor(
        distributions.Normal, mu=mu, sigma=sigma)
  # draws 4 samples each with shape (2, 3) and concatenates
  assertEqual(dt.value().get_shape(), (4, 2, 3))

tf.nn.rnn_cell.DropoutWrapper.zero_state()

tf.nn.rnn_cell.DropoutWrapper.zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

Args:
  batch_size: int, float, or unit Tensor representing the batch size.
  dtype: the data type to use for the state.

Returns:
  If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
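A sketch with a wrapped LSTM cell, whose state_size is a (c, h) pair, so zero_state returns matching nested zeros (sizes are illustrative):

  import tensorflow as tf

  cell = tf.nn.rnn_cell.DropoutWrapper(
      tf.nn.rnn_cell.BasicLSTMCell(num_units=128, state_is_tuple=True),
      output_keep_prob=0.5)
  # state is a (c, h) pair of zero-filled tensors, each of shape [32, 128].
  state = cell.zero_state(batch_size=32, dtype=tf.float32)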

tf.contrib.bayesflow.variational_inference.elbo()

tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO')

Evidence Lower BOund. log p(x) >= ELBO. Optimization objective for inference of hidden variables by variational inference. This function is meant to be used in conjunction with DistributionTensor. The user should build out the inference network, using DistributionTensors as latent variables, and the generative network. elbo at minimum needs p(x|Z) and assumes that all DistributionTensors upstream of p(x|Z) are the variational distributions.
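For reference, the bound being maximized is the standard variational identity (general, not specific to this implementation), written in the same notation as the rest of these docs:

  log p(x) >= E_q[ log p(x|Z) ] - KL[ q(Z) || p(Z) ] = ELBO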

tf.contrib.metrics.set_intersection()

tf.contrib.metrics.set_intersection(a, b, validate_indices=True)

Compute set intersection of elements in the last dimension of a and b. All but the last dimension of a and b must match.

Args:
  a: Tensor or SparseTensor of the same type as b. If sparse, indices must be sorted in row-major order.
  b: Tensor or SparseTensor of the same type as a. Must be SparseTensor if a is SparseTensor. If sparse, indices must be sorted in row-major order.
  validate_indices: Whether to validate the order and range of sparse indices in a and b.
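A dense-input sketch (the values are illustrative; the result is a SparseTensor whose last dimension holds each row's intersection):

  import tensorflow as tf

  a = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64)
  b = tf.constant([[1, 3, 9], [5, 6, 7]], dtype=tf.int64)
  intersection = tf.contrib.metrics.set_intersection(a, b)

  with tf.Session() as sess:
      # Row 0 intersects to {1, 3}; row 1 to {5, 6}.
      print(sess.run(intersection).values)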