tf.contrib.distributions.MultivariateNormalFull.sample_n()

tf.contrib.distributions.MultivariateNormalFull.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: Name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
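The prepended-(n,) shape convention can be illustrated with a NumPy analogue (this is not the tf.contrib call itself; the means and covariance below are made-up example values):

```python
# NumPy analogue: drawing n samples from a full-covariance multivariate normal
# prepends a dimension of size n to the event shape, mirroring sample_n's return.
import numpy as np

mu = np.array([0.0, 1.0])          # event shape: (2,)
sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])     # full covariance matrix
rng = np.random.default_rng(seed=42)

n = 5
samples = rng.multivariate_normal(mu, sigma, size=n)
print(samples.shape)  # (5, 2): the sample dimension (n,) is prepended
```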

tf.contrib.distributions.MultivariateNormalFull.log_cdf()

tf.contrib.distributions.MultivariateNormalFull.log_cdf(value, name='log_cdf')

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

  log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
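The numerical point in the docstring can be demonstrated for a one-dimensional standard normal (a sketch of the underflow issue, not the multivariate implementation): for x << -1 the cdf underflows to 0.0 in double precision, so its logarithm is -inf, while a direct approximation of log_cdf stays finite.

```python
# For very negative x, log(cdf(x)) underflows; a direct asymptotic
# approximation of log_cdf remains finite and usable.
import math

def naive_log_cdf(x):
    # standard-normal cdf via erfc; underflows to 0.0 for very negative x
    cdf = 0.5 * math.erfc(-x / math.sqrt(2.0))
    return math.log(cdf) if cdf > 0.0 else float('-inf')

def asymptotic_log_cdf(x):
    # leading term of the expansion log Phi(x) ~ -x^2/2 - log(-x * sqrt(2*pi)),
    # valid for x << -1
    return -0.5 * x * x - math.log(-x * math.sqrt(2.0 * math.pi))

x = -40.0
print(naive_log_cdf(x))       # -inf: the cdf underflowed before the log
print(asymptotic_log_cdf(x))  # about -804.6, a finite answer
```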

tf.contrib.training.NextQueuedSequenceBatch.next_key

tf.contrib.training.NextQueuedSequenceBatch.next_key

The key names of the next (in iteration) truncated unrolled examples.

The format of the key is:

  "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)

if sequence + 1 < sequence_count, otherwise:

  "STOP:%s" % original_key

where original_key is the unique key read in by the prefetcher.

Returns:
  A string vector of length batch_size, the keys.
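The key format above can be traced in plain Python (the format strings come from the docstring; the example key and counts are made up):

```python
# Build the key strings the docstring describes: numbered segments while the
# sequence is still unrolling, and a STOP: key for the final segment.
def make_key(sequence, sequence_count, original_key):
    if sequence + 1 < sequence_count:
        return "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)
    return "STOP:%s" % original_key

original_key = "example_a"   # hypothetical key read in by the prefetcher
sequence_count = 3
for sequence in range(sequence_count):
    print(make_key(sequence, sequence_count, original_key))
# 00001_of_00003:example_a
# 00002_of_00003:example_a
# STOP:example_a
```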

tf.contrib.learn.monitors.ExportMonitor.__init__()

tf.contrib.learn.monitors.ExportMonitor.__init__(*args, **kwargs)

Initializes ExportMonitor. (deprecated arguments)

SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23. Instructions for updating: The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will both become required args.

Args:
  every_n_steps: Run monitor every N steps.
  export_dir: str, f

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op

tf.contrib.training.SequenceQueueingStateSaver.prefetch_op

The op used to prefetch new data into the state saver.

Running it once enqueues one new input example into the state saver. The first time this gets called, it additionally creates the prefetch_op. Subsequent calls simply return the previously created prefetch_op. It should be run in a separate thread via e.g. a QueueRunner.

Returns:
  An Operation that performs prefetching.
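The create-on-first-access behavior described above can be sketched with a generic memoized property (a hypothetical stand-in, not the SequenceQueueingStateSaver implementation):

```python
# Hypothetical sketch: the first access builds the op, later accesses return
# the cached one, matching the docstring's description.
class StateSaverSketch:
    def __init__(self):
        self._prefetch_op = None

    @property
    def prefetch_op(self):
        if self._prefetch_op is None:
            # stand-in for constructing the real enqueue Operation
            self._prefetch_op = object()
        return self._prefetch_op

saver = StateSaverSketch()
op_first = saver.prefetch_op    # created on first access
op_second = saver.prefetch_op   # cached: the same object is returned
print(op_first is op_second)    # True
```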

tf.contrib.bayesflow.stochastic_tensor.BetaWithSoftplusABTensor.loss()

tf.contrib.bayesflow.stochastic_tensor.BetaWithSoftplusABTensor.loss(final_loss, name='Loss')

tf.contrib.learn.monitors.SummarySaver.begin()

tf.contrib.learn.monitors.SummarySaver.begin(max_steps=None)

Called at the beginning of training.

When called, the default graph is the one we are executing.

Args:
  max_steps: int, the maximum global step this training will run until.

Raises:
  ValueError: if we've already begun a run.

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalCholeskyTensor.distribution

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalCholeskyTensor.distribution

tf.contrib.losses.get_losses()

tf.contrib.losses.get_losses(scope=None, loss_collection='losses')

Gets the list of losses from the loss_collection.

Args:
  scope: An optional scope for filtering the losses to return.
  loss_collection: Optional losses collection.

Returns:
  A list of loss tensors.
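The collection-plus-scope lookup can be sketched with a plain dict standing in for the graph's collections (a hypothetical analogue, not the TensorFlow machinery; the loss names below are invented):

```python
# Plain-dict stand-in for graph collections: losses are registered under a
# collection name, and an optional scope prefix filters the returned list.
_collections = {
    "losses": [
        "tower_0/softmax_loss",
        "tower_1/softmax_loss",
        "regularization/l2_loss",
    ]
}

def get_losses(scope=None, loss_collection="losses"):
    losses = _collections.get(loss_collection, [])
    if scope is None:
        return list(losses)
    # keep only losses whose name falls under the given scope prefix
    return [loss for loss in losses if loss.startswith(scope)]

print(get_losses())            # all three registered loss names
print(get_losses("tower_0"))   # ['tower_0/softmax_loss']
```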

tf.contrib.learn.monitors.LoggingTrainable.step_begin()

tf.contrib.learn.monitors.LoggingTrainable.step_begin(step)

Overrides BaseMonitor.step_begin.

When overriding this method, you must call the super implementation.

Args:
  step: int, the current value of the global step.

Returns:
  A list, the result of every_n_step_begin, if that was called this step, or an empty list otherwise.

Raises:
  ValueError: if called more than once during a step.
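The gating described in the Returns and Raises sections can be sketched as follows (a hypothetical every-N monitor, not the LoggingTrainable implementation):

```python
# Hypothetical sketch: every_n_step_begin only fires on selected steps;
# other steps return [], and re-calling step_begin for a past step raises.
class EveryNSketch:
    def __init__(self, every_n_steps):
        self._every_n_steps = every_n_steps
        self._last_step = None

    def every_n_step_begin(self, step):
        # stand-in for the real monitor work (e.g. tensors to log this step)
        return ["log_op_at_step_%d" % step]

    def step_begin(self, step):
        if self._last_step is not None and step <= self._last_step:
            raise ValueError("step_begin called more than once for step %d" % step)
        self._last_step = step
        if step % self._every_n_steps == 0:
            return self.every_n_step_begin(step)
        return []

monitor = EveryNSketch(every_n_steps=2)
print(monitor.step_begin(0))  # ['log_op_at_step_0']
print(monitor.step_begin(1))  # []
print(monitor.step_begin(2))  # ['log_op_at_step_2']
```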