tf.contrib.bayesflow.stochastic_graph.surrogate_loss()

tf.contrib.bayesflow.stochastic_graph.surrogate_loss(sample_losses, stochastic_tensors=None, name='SurrogateLoss')

Surrogate loss for stochastic graphs. This function will call loss_fn on each StochasticTensor upstream of sample_losses, passing the losses that it influenced. Note that currently surrogate_loss does not work with StochasticTensors instantiated in while_loops or other control structures.

Args:
  sample_losses: a list or tuple of final losses. Each loss should be per example in the batch; that is, it should have dimensionality of 1 or greater.
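A minimal sketch of how this might be wired up, assuming a TF 1.x-era runtime where tf.contrib.bayesflow.stochastic_tensor is available and StochasticTensor wraps a Distribution instance; the constructor arguments and the implicit Tensor conversion of z are assumptions here, not guaranteed by the docstring above:

  import tensorflow as tf

  sg = tf.contrib.bayesflow.stochastic_graph
  st = tf.contrib.bayesflow.stochastic_tensor
  ds = tf.contrib.distributions

  mu = tf.Variable(tf.zeros([8]))                        # per-example parameter to learn
  z = st.StochasticTensor(ds.Normal(mu, tf.ones([8])))   # sampled, so not differentiable directly
  loss = tf.square(tf.identity(z) - 3.0)                 # per-example loss, shape [8]

  # surrogate_loss walks upstream from `loss`, finds `z`, and adds the
  # score-function term needed for unbiased gradients w.r.t. mu.
  surrogate = sg.surrogate_loss([loss])
  train_op = tf.train.AdamOptimizer(0.1).minimize(tf.reduce_sum(surrogate))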

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace()

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler_logspace(log_f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler_logspace')

Importance sampling with a positive function, in log-space. With p(z) := exp{log_p(z)} and f(z) = exp{log_f(z)}, this Op returns

  Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ],  z_i ~ q,
  \approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]
  = Log[ E_p[f(Z)] ]

This integral is done in log-space with max-subtraction to better handle the often extreme values that f(z) p(z) / q(z) can take on.
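A minimal sketch, assuming a TF 1.x-era runtime with tf.contrib available; the Normal(0., 1.) constructor call and the use of p.log_prob as the log_p callable are assumptions here:

  import tensorflow as tf

  mc = tf.contrib.bayesflow.monte_carlo
  ds = tf.contrib.distributions

  p = ds.Normal(0., 1.)
  q = ds.Normal(0., 2.)   # broader proposal covering p's support

  # For Z ~ Normal(0, 1), Log E_p[exp(Z)] = 0.5, so the estimate should be near 0.5.
  log_e = mc.expectation_importance_sampler_logspace(
      log_f=lambda z: z,          # log f(z), with f(z) = exp(z)
      log_p=p.log_prob,
      sampling_dist_q=q,
      n=10000, seed=42)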

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler()

tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler(f, log_p, sampling_dist_q, z=None, n=None, seed=None, name='expectation_importance_sampler')

Monte Carlo estimate of E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]. With p(z) := exp{log_p(z)}, this Op returns

  n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ],  z_i ~ q,
  \approx E_q[ f(Z) p(Z) / q(Z) ]
  = E_p[f(Z)]

This integral is done in log-space with max-subtraction to better handle the often extreme values that f(z) p(z) / q(z) can take on.
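A minimal sketch under the same assumptions as above (TF 1.x-era contrib; p.log_prob as the log_p callable):

  import tensorflow as tf

  mc = tf.contrib.bayesflow.monte_carlo
  ds = tf.contrib.distributions

  p = ds.Normal(0., 1.)
  q = ds.Normal(0., 2.)   # proposal with heavier spread than p

  # E_p[Z^2] = 1 for a standard normal, so the estimate should be near 1.
  est = mc.expectation_importance_sampler(
      f=tf.square,
      log_p=p.log_prob,
      sampling_dist_q=q,
      n=10000, seed=42)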

tf.contrib.bayesflow.monte_carlo.expectation()

tf.contrib.bayesflow.monte_carlo.expectation(f, p, z=None, n=None, seed=None, name='expectation')

Monte Carlo estimate of an expectation: E_p[f(Z)] with sample mean. This Op returns

  n^{-1} sum_{i=1}^n f(z_i),  where z_i ~ p,
  \approx E_p[f(Z)]

The user supplies either a Tensor of samples z, or the number of samples to draw, n.

Args:
  f: Callable mapping samples from p to Tensors.
  p: tf.contrib.distributions.BaseDistribution.
  z: Tensor of samples from p, produced by p.sample_n.
  n: Integer Tensor. Number of samples to draw if z is not provided.
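A minimal sketch, assuming a TF 1.x-era runtime; the Normal constructor call is an assumption here:

  import tensorflow as tf

  mc = tf.contrib.bayesflow.monte_carlo
  ds = tf.contrib.distributions

  p = ds.Normal(0., 1.)

  # Plain sample-mean estimate of E_p[Z^2]; either draw n samples internally,
  # or pass precomputed samples via z=p.sample_n(1000) instead of n.
  est = mc.expectation(f=tf.square, p=p, n=1000, seed=42)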

tf.contrib.bayesflow.entropy.renyi_ratio()

tf.contrib.bayesflow.entropy.renyi_ratio(log_p, q, alpha, z=None, n=None, seed=None, name='renyi_ratio')

Monte Carlo estimate of the ratio appearing in Renyi divergence. This can be used to compute the Renyi (alpha) divergence, or a log evidence approximation based on Renyi divergence.
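A minimal sketch, assuming a TF 1.x-era runtime; the specific distributions and the use of p.log_prob as the log_p callable are assumptions here:

  import tensorflow as tf

  entropy = tf.contrib.bayesflow.entropy
  ds = tf.contrib.distributions

  q = ds.Normal(0., 1.)          # variational approximation
  p = ds.Normal(0.5, 1.5)        # target; log_p may be unnormalized in practice

  # Monte Carlo estimate of the Renyi ratio at alpha = 0.5, using 1000 samples from q.
  r = entropy.renyi_ratio(log_p=p.log_prob, q=q, alpha=0.5, n=1000, seed=42)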

tf.contrib.bayesflow.entropy.renyi_alpha()

tf.contrib.bayesflow.entropy.renyi_alpha(step, decay_time, alpha_min, alpha_max=0.99999, name='renyi_alpha')

Exponentially decaying Tensor appropriate for Renyi ratios. When minimizing the Renyi divergence for 0 <= alpha < 1 (or maximizing the Renyi equivalent of the ELBO) in high dimensions, it is not uncommon to experience NaN and inf values when alpha is far from 1. For that reason, it is often desirable to start the optimization with alpha very close to 1, and reduce it to a final alpha_min.
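A minimal sketch of the annealing pattern described above, assuming a TF 1.x-era runtime; the use of an integer step variable and the specific decay_time are assumptions here:

  import tensorflow as tf

  entropy = tf.contrib.bayesflow.entropy

  step = tf.Variable(0, trainable=False)   # training step counter (assumed acceptable dtype)

  # Anneal alpha from near 1 down toward 0.5 over roughly 10000 steps,
  # then feed it into renyi_ratio so early optimization stays numerically stable.
  alpha = entropy.renyi_alpha(step, decay_time=10000, alpha_min=0.5)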

tf.contrib.bayesflow.entropy.entropy_shannon()

tf.contrib.bayesflow.entropy.entropy_shannon(p, z=None, n=None, seed=None, form=None, name='entropy_shannon')

Monte Carlo or deterministic computation of Shannon's entropy. Depending on the kwarg form, this Op returns either the analytic entropy of the distribution p, or the sampled entropy:

  -n^{-1} sum_{i=1}^n p.log_prob(z_i),  where z_i ~ p,
  \approx - E_p[ Log[p(Z)] ]
  = Entropy[p]

The user supplies either a Tensor of samples z, or the number of samples to draw, n.

Args:
  p: tf.contrib.distributions.BaseDistribution.
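A minimal sketch, assuming a TF 1.x-era runtime; the default-form behavior described in the comment is an assumption here:

  import tensorflow as tf

  entropy = tf.contrib.bayesflow.entropy
  ds = tf.contrib.distributions

  p = ds.Normal(0., 1.)

  # With form left as None, the analytic entropy is used when p defines one;
  # passing n makes the sampled (Monte Carlo) form available as well.
  h = entropy.entropy_shannon(p, n=1000, seed=42)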

tf.contrib.bayesflow.entropy.elbo_ratio()

tf.contrib.bayesflow.entropy.elbo_ratio(log_p, q, z=None, n=None, seed=None, form=None, name='elbo_ratio')

Estimate of the ratio appearing in the ELBO and KL divergence. With p(z) := exp{log_p(z)}, this Op returns an approximation of

  E_q[ Log[p(Z) / q(Z)] ]

The term E_q[ Log[p(Z)] ] is always computed as a sample mean. The term E_q[ Log[q(Z)] ] can be computed with samples, or with an exact formula if q.entropy() is defined. This is controlled with the kwarg form. This log-ratio appears in different contexts, such as the ELBO and the KL divergence.
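A minimal sketch, assuming a TF 1.x-era runtime; the stand-in distributions and the use of p.log_prob as the log_p callable are assumptions here:

  import tensorflow as tf

  entropy = tf.contrib.bayesflow.entropy
  ds = tf.contrib.distributions

  q = ds.Normal(0., 1.)          # variational posterior
  p = ds.Normal(0.5, 2.0)        # stand-in for the (possibly unnormalized) target

  # Sample-based estimate of E_q[ Log[p(Z) / q(Z)] ]; maximizing this w.r.t.
  # q's parameters is the usual ELBO optimization.
  ratio = entropy.elbo_ratio(log_p=p.log_prob, q=q, n=1000, seed=42)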

tf.constant()

tf.constant(value, dtype=None, shape=None, name='Const')

Creates a constant tensor. The resulting tensor is populated with values of type dtype, as specified by the arguments value and (optionally) shape (see the examples below). The argument value can be a constant value, or a list of values of type dtype. If value is a list, then the length of the list must be less than or equal to the number of elements implied by the shape argument (if specified). In the case where the list length is less than the number of elements implied by shape, the last element in the list will be used to fill the remaining entries.
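For example, illustrating the list, scalar-fill, and short-list cases described above (behavior as in the 1.x-era releases this page documents):

  import tensorflow as tf

  # Constant 1-D tensor populated from a value list.
  a = tf.constant([1, 2, 3, 4, 5, 6, 7])   # => [1 2 3 4 5 6 7]

  # Constant 2-D tensor: a scalar value is repeated to fill shape [2, 3].
  b = tf.constant(-1.0, shape=[2, 3])      # => [[-1. -1. -1.]
                                           #     [-1. -1. -1.]]

  # A short list is extended with its last element to fill the shape.
  c = tf.constant([1, 2], shape=[5])       # => [1 2 2 2 2]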

tf.conj()

tf.conj(x, name=None)

Returns the complex conjugate of a complex number. Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part. The complex conjugate returned by this operation is of the form \(a - bj\). For example:

  # tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
  tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
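A runnable version of the same example, assuming a TF 1.x session-based runtime:

  import tensorflow as tf

  x = tf.constant([-2.25 + 4.75j, 3.25 + 5.75j])   # complex constant
  y = tf.conj(x)                                   # flips the sign of each imaginary part

  with tf.Session() as sess:
      print(sess.run(y))   # [-2.25-4.75j  3.25-5.75j]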