tf.contrib.graph_editor.detach_control_inputs()

tf.contrib.graph_editor.detach_control_inputs(sgv)

Detach all the external control inputs of the subgraph sgv.

Args:
  sgv: the subgraph view to be detached. This argument is converted to a subgraph using the same rules as the function subgraph.make_view.

tf.contrib.distributions.DirichletMultinomial.variance()

tf.contrib.distributions.DirichletMultinomial.variance(name='variance')

Variance.

Additional documentation from DirichletMultinomial:

The variance for each batch member is defined as:

  Var(X_j) = n * alpha_j / alpha_0 * (1 - alpha_j / alpha_0) * (n + alpha_0) / (1 + alpha_0)

where alpha_0 = sum_j alpha_j. The covariance between elements in a batch is defined as:

  Cov(X_i, X_j) = -n * alpha_i * alpha_j / alpha_0**2 * (n + alpha_0) / (1 + alpha_0)
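The two formulas can be checked numerically in plain Python (no TensorFlow); the values of n and alpha below are arbitrary example inputs, not anything prescribed by the API.

```python
# Illustrative evaluation of the Dirichlet-multinomial variance and
# covariance formulas above, using made-up example parameters.
n = 10.0
alpha = [1.0, 2.0, 3.0]
alpha_0 = sum(alpha)  # alpha_0 = sum_j alpha_j

def variance(j):
    """Var(X_j) per the formula above."""
    p = alpha[j] / alpha_0
    return n * p * (1 - p) * (n + alpha_0) / (1 + alpha_0)

def covariance(i, j):
    """Cov(X_i, X_j) for i != j, per the formula above."""
    return -n * alpha[i] * alpha[j] / alpha_0 ** 2 * (n + alpha_0) / (1 + alpha_0)

print(variance(0))       # Var of the first component
print(covariance(0, 1))  # covariance of the first two components
```

Note that as n + alpha_0 grows relative to 1 + alpha_0, both the variances and the magnitudes of the (always non-positive) covariances scale up together.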

tf.contrib.distributions.MultivariateNormalFull.log_survival_function()

tf.contrib.distributions.MultivariateNormalFull.log_survival_function(value, name='log_survival_function')

Log survival function. Given a random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
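The accuracy remark above can be illustrated in plain Python for a standard normal (this is only a sketch of the numerical issue, not TensorFlow's implementation):

```python
import math

def log_survival_naive(x):
    # log(1 - cdf(x)): for large x, cdf(x) rounds to exactly 1.0 in
    # double precision, so 1 - cdf(x) underflows to 0 and log fails.
    cdf = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return math.log(1.0 - cdf)

def log_survival_stable(x):
    # Compute the survival probability directly via the complementary
    # error function, avoiding the 1 - cdf(x) cancellation entirely.
    return math.log(0.5 * math.erfc(x / math.sqrt(2.0)))

print(log_survival_stable(10.0))  # finite; the naive version fails at x = 10
```

For x = 10 the true survival probability is about 7.6e-24, far below double-precision resolution near 1.0, which is why the naive form breaks down.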

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.parameters

tf.contrib.distributions.MultivariateNormalDiagWithSoftplusStDev.parameters Dictionary of parameters used by this Distribution.

tf.contrib.training.NextQueuedSequenceBatch.key

tf.contrib.training.NextQueuedSequenceBatch.key

The key names of the given truncated unrolled examples. The format of the key is:

  "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)

where original_key is the unique key read in by the prefetcher.

Returns:
  A string vector of length batch_size, the keys.
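The key layout can be shown with plain Python string formatting; the values below are made-up examples (original_key in practice comes from the prefetcher):

```python
# Illustrative key construction using the documented format string.
sequence, sequence_count, original_key = 2, 5, "example_key"
key = "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)
print(key)  # 00002_of_00005:example_key
```

The zero-padded indices mean keys for the same original example sort lexicographically in unroll order.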

tf.contrib.learn.monitors.SummarySaver.step_end()

tf.contrib.learn.monitors.SummarySaver.step_end(step, output)

Overrides BaseMonitor.step_end. When overriding this method, you must call the super implementation.

Args:
  step: int, the current value of the global step.
  output: dict mapping string values representing tensor names to the values resulting from running these tensors. Values may be either scalars, for scalar tensors, or Numpy arrays, for non-scalar tensors.

Returns:
  bool, the result of every_n_step_end, if that was called this step.
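The "every N steps" gating implied by the Returns clause can be sketched as follows; the class and attribute names here are illustrative, not tf.contrib.learn's actual implementation:

```python
# Minimal sketch of an every-N-steps monitor: step_end delegates to
# every_n_step_end only when enough steps have elapsed.
class EveryN(object):
    def __init__(self, every_n_steps=100):
        self._every_n_steps = every_n_steps
        self._last_active_step = None

    def step_end(self, step, output):
        if (self._last_active_step is None
                or step >= self._last_active_step + self._every_n_steps):
            self._last_active_step = step
            return self.every_n_step_end(step, output)
        return False

    def every_n_step_end(self, step, output):
        # A subclass (e.g. a summary saver) would write summaries here
        # and return True to request that training stop.
        return False
```

A subclass only implements every_n_step_end; the base class owns the step bookkeeping.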

tf.contrib.learn.LinearClassifier

class tf.contrib.learn.LinearClassifier

Linear classifier model.

Train a linear model to classify instances into one of multiple possible classes. When the number of possible classes is 2, this is binary classification.

Example:

  education = sparse_column_with_hash_bucket(column_name="education",
                                             hash_bucket_size=1000)
  occupation = sparse_column_with_hash_bucket(column_name="occupation",
                                              hash_bucket_size=1000)
  e
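What a trained linear classifier computes at prediction time can be sketched in plain Python (no TensorFlow); the weights, biases, and features below are made-up numbers, not anything the estimator produces:

```python
import math

# Sketch: logits are an affine map of the features, class probabilities
# come from a softmax over the logits, and the prediction is the argmax.
weights = [[0.5, -1.0],   # one row of weights per class
           [0.2, 0.3]]
biases = [0.1, -0.1]
features = [1.0, 2.0]

logits = [sum(w * x for w, x in zip(row, features)) + b
          for row, b in zip(weights, biases)]
denom = sum(math.exp(l) for l in logits)
probs = [math.exp(l) / denom for l in logits]
predicted_class = max(range(len(logits)), key=lambda i: logits[i])
print(predicted_class)
```

Training fits weights and biases; the sparse hash-bucket columns in the example above determine how raw string features are mapped into the feature vector.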

tf.contrib.learn.monitors.CaptureVariable.begin()

tf.contrib.learn.monitors.CaptureVariable.begin(max_steps=None)

Called at the beginning of training. When called, the default graph is the one we are executing.

Args:
  max_steps: int, the maximum global step this training will run until.

Raises:
  ValueError: if we've already begun a run.

tf.contrib.distributions.NormalWithSoftplusSigma.log_pmf()

tf.contrib.distributions.NormalWithSoftplusSigma.log_pmf(value, name='log_pmf')

Log probability mass function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if is_continuous.

tf.contrib.distributions.RegisterKL.__call__()

tf.contrib.distributions.RegisterKL.__call__(kl_fn)

Perform the KL registration.

Args:
  kl_fn: The function to use for the KL divergence.

Returns:
  kl_fn

Raises:
  TypeError: if kl_fn is not a callable.
  ValueError: if a KL divergence function has already been registered for the given argument classes.