tf.contrib.crf.CrfForwardRnnCell

class tf.contrib.crf.CrfForwardRnnCell

Computes the alpha values in a linear-chain CRF. See http://www.cs.columbia.edu/~mcollins/fb.pdf for reference.
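The forward (alpha) recursion that this cell implements step-by-step can be sketched in plain NumPy. This is a minimal illustration of the algorithm, not the TensorFlow cell's API; the function names and array shapes below are assumptions for the sketch:

```python
import numpy as np

def crf_forward_alphas(unary, transitions):
    """Log-alpha values for a linear-chain CRF (illustrative sketch).

    unary:       [T, K] per-step unary (emission) scores.
    transitions: [K, K] scores, transitions[i, j] = score of tag i -> tag j.
    Returns a [T, K] matrix of log-alphas; the log-sum-exp of the last
    row is the log partition function log Z.
    """
    T, K = unary.shape
    alphas = np.empty((T, K))
    alphas[0] = unary[0]
    for t in range(1, T):
        # scores[i, j] = alpha[t-1, i] + transitions[i, j], via broadcasting
        scores = alphas[t - 1][:, None] + transitions
        # numerically stable log-sum-exp over the previous tag i
        m = scores.max(axis=0)
        alphas[t] = m + np.log(np.exp(scores - m).sum(axis=0)) + unary[t]
    return alphas

def crf_log_norm(unary, transitions):
    """log Z: log-sum-exp over the final alphas."""
    last = crf_forward_alphas(unary, transitions)[-1]
    m = last.max()
    return m + np.log(np.exp(last - m).sum())
```

For small sequences this agrees with brute-force summation over all tag sequences, which is a convenient way to sanity-check the recursion.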

tf.contrib.copy_graph.get_copied_op()

tf.contrib.copy_graph.get_copied_op(org_instance, graph, scope='')

Given an Operation instance from some Graph, returns its namesake from graph, under the specified scope (default ""). If a copy of org_instance is present in graph under the given scope, it will be returned.

Args:
  org_instance: An Operation from some Graph.
  graph: The Graph to be searched for a copy of org_instance.
  scope: The scope org_instance is present in.

Returns:
  The `Operation` copy from `graph`.

tf.contrib.copy_graph.copy_variable_to_graph()

tf.contrib.copy_graph.copy_variable_to_graph(org_instance, to_graph, scope='')

Given a Variable instance from one Graph, initializes and returns a copy of it from another Graph, under the specified scope (default "").

Args:
  org_instance: A Variable from some Graph.
  to_graph: The Graph to copy the Variable to.
  scope: A scope for the new Variable (default "").

Returns:
  The copied `Variable` from `to_graph`.

Raises:
  TypeError: If org_instance is not a Variable.

tf.contrib.copy_graph.copy_op_to_graph()

tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='')

Given an Operation `org_instance` from one Graph, initializes and returns a copy of it from another Graph, under the specified scope (default ""). The copying is done recursively, so any Operation whose output is required to evaluate org_instance is also copied (unless already done). Since Variable instances are copied separately, those required to evaluate org_instance must be provided as input.

Args:
  org_instance: An Operation from some Graph.

tf.contrib.bayesflow.variational_inference.register_prior()

tf.contrib.bayesflow.variational_inference.register_prior(variational, prior)

Associate a variational DistributionTensor with a Distribution prior. This is a helper function used in conjunction with elbo that allows users to specify the mapping between variational distributions and their priors without having to pass in variational_with_prior explicitly.

Args:
  variational: DistributionTensor q(Z). Approximating distribution.
  prior: Distribution p(Z). Prior distribution.

Returns:
  None

Raises:

tf.contrib.bayesflow.variational_inference.elbo_with_log_joint()

tf.contrib.bayesflow.variational_inference.elbo_with_log_joint(log_joint, variational=None, keep_batch_dim=True, form=None, name='ELBO')

Evidence Lower BOund. log p(x) >= ELBO. This method is for models that have computed p(x,Z) instead of p(x|Z). See elbo for further details. Because only the joint is specified, analytic KL is not available.

Args:
  log_joint: Tensor log p(x, Z).
  variational: list of DistributionTensor q(Z). If None, defaults to all DistributionTensor objects upstream of log_joint.

tf.contrib.bayesflow.variational_inference.ELBOForms.check_form()

tf.contrib.bayesflow.variational_inference.ELBOForms.check_form(form)

tf.contrib.bayesflow.variational_inference.ELBOForms

class tf.contrib.bayesflow.variational_inference.ELBOForms

Constants to control the elbo calculation. analytic_kl uses the analytic KL divergence between the variational distribution(s) and the prior(s). analytic_entropy uses the analytic entropy of the variational distribution(s). sample uses the sample KL or the sample entropy if the joint is provided. See elbo for what is used by default.

tf.contrib.bayesflow.variational_inference.elbo()

tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO')

Evidence Lower BOund. log p(x) >= ELBO. Optimization objective for inference of hidden variables by variational inference. This function is meant to be used in conjunction with DistributionTensor. The user should build out the inference network, using DistributionTensors as latent variables, and the generative network. elbo at minimum needs p(x|Z) and ass
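The quantity elbo estimates can be made concrete with a framework-free sketch: a Monte Carlo estimate of E_q[log p(x|Z) + log p(Z) - log q(Z)] for a toy conjugate-Gaussian model. The model, function name, and parameters below are assumptions for the sketch, not part of the tf.contrib API:

```python
import numpy as np

def mc_elbo(x, q_mean, q_std, n_samples=100_000, seed=0):
    """Monte Carlo ELBO for the toy model:
        prior       z ~ N(0, 1)
        likelihood  x | z ~ N(z, 1)
    with variational distribution q(z) = N(q_mean, q_std**2).

    ELBO = E_q[log p(x|z) + log p(z) - log q(z)] <= log p(x),
    with equality when q is the exact posterior.
    """
    rng = np.random.default_rng(seed)
    z = q_mean + q_std * rng.standard_normal(n_samples)
    # log densities of the three terms, evaluated at the sampled z
    log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (x - z) ** 2
    log_prior = -0.5 * np.log(2 * np.pi) - 0.5 * z ** 2
    log_q = (-0.5 * np.log(2 * np.pi * q_std ** 2)
             - 0.5 * ((z - q_mean) / q_std) ** 2)
    return np.mean(log_lik + log_prior - log_q)
```

In this conjugate model the exact posterior is N(x/2, 1/2) and the marginal is p(x) = N(x; 0, 2), so plugging the exact posterior into mc_elbo recovers log p(x), while any other q gives a strictly smaller value (the gap is KL(q || p(z|x))).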

tf.contrib.bayesflow.stochastic_tensor.WishartFullTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.WishartFullTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args)