tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob(value, name='log_prob')

Log probability density/mass function (depending on is_continuous).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
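For intuition about the underlying math (this is a pure-Python sketch, not the TensorFlow API): a LaplaceWithSoftplusScale distribution constrains its raw scale parameter to be positive via softplus, and the Laplace log-density is -log(2b) - |x - loc| / b. The helper names below are illustrative:

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def laplace_log_prob(value, loc, raw_scale):
    # The effective scale is softplus(raw_scale), as in
    # LaplaceWithSoftplusScale; then apply the Laplace log-density.
    scale = softplus(raw_scale)
    return -math.log(2.0 * scale) - abs(value - loc) / scale
```

At the mode (value == loc) the log-density reduces to -log(2 * scale), and it decays linearly in |value - loc| away from it.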

tf.contrib.learn.monitors.StopAtStep.begin()

tf.contrib.learn.monitors.StopAtStep.begin(max_steps=None)

Called at the beginning of training. When called, the default graph is the one we are executing.

Args:
  max_steps: int, the maximum global step this training will run until.

Raises:
  ValueError: if we've already begun a run.
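The begin/stop lifecycle can be sketched in plain Python. This is an illustrative stand-in for the documented contract, not the tf.contrib.learn implementation; the class and method names below are made up:

```python
class StopAtStepSketch:
    """Illustrative monitor that requests a stop once a step limit is hit."""

    def __init__(self):
        self._begun = False
        self._max_steps = None

    def begin(self, max_steps=None):
        # Mirrors the documented contract: begin() may only be
        # called once per run, otherwise ValueError is raised.
        if self._begun:
            raise ValueError("begin() called while a run is already active")
        self._begun = True
        self._max_steps = max_steps

    def should_stop(self, global_step):
        # Request a stop once the global step reaches max_steps.
        return self._max_steps is not None and global_step >= self._max_steps
```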

tf.FixedLengthRecordReader.read()

tf.FixedLengthRecordReader.read(queue, name=None)

Returns the next record (key, value pair) produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file).

Args:
  queue: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items.
  name: A name for the operation (optional).

Returns:
  A tuple of Tensors (key, value).
  key: A string scalar Tensor.
  value: A string scalar Tensor.
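To make the fixed-length-record idea concrete, here is a plain-Python sketch, not the TensorFlow reader: it splits a byte string into equal-sized records and keys each one by filename and record index. The helper name and the exact key format are illustrative assumptions:

```python
def read_fixed_length_records(data, record_bytes, filename="data.bin"):
    """Yield (key, value) pairs from a byte string of fixed-size records."""
    for index in range(len(data) // record_bytes):
        value = data[index * record_bytes:(index + 1) * record_bytes]
        # Keys mimic a "filename:record_index" convention so each
        # record can be traced back to its source position.
        yield ("%s:%d" % (filename, index), value)
```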

tf.contrib.distributions.DirichletMultinomial.event_shape()

tf.contrib.distributions.DirichletMultinomial.event_shape(name='event_shape')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:
  name: name to give to the op.

Returns:
  event_shape: Tensor.

tf.contrib.bayesflow.stochastic_tensor.SampleAndReshapeValue

class tf.contrib.bayesflow.stochastic_tensor.SampleAndReshapeValue

Ask the StochasticTensor for n samples and reshape the result. Sampling from a StochasticTensor increases the rank of the value by 1 (because each sample represents a new outer dimension). This ValueType requests n samples from StochasticTensors run within its context, and the outer two dimensions of the result are reshaped to intermix the samples with the outermost (usually batch) dimension.

Example: # mu and sigma are both shaped (2, 3)
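The reshape behaviour can be sketched without TensorFlow: n samples of a batch of shape (2, 3) are first stacked into shape (n, 2, 3), and the outer two dimensions are then merged into (n * 2, 3). A pure-Python illustration over nested lists, with a made-up helper name:

```python
def sample_and_reshape(samples):
    """Merge the sample dimension into the batch dimension.

    samples: a list of n "tensors", each a list of batch rows,
    i.e. shape (n, batch, ...).
    Returns a single list of n * batch rows, i.e. (n * batch, ...),
    matching a row-major reshape of the stacked samples.
    """
    merged = []
    for sample in samples:
        # Each sample contributes its whole batch to the output.
        merged.extend(sample)
    return merged
```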

tf.add_n()

tf.add_n(inputs, name=None)

Adds all input tensors element-wise.

Args:
  inputs: A list of Tensor objects, each with the same shape and type.
  name: A name for the operation (optional).

Returns:
  A Tensor of the same shape and type as the elements of inputs.

Raises:
  ValueError: If inputs don't all have the same shape and dtype or the shape cannot be inferred.
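The semantics are a simple elementwise reduction. A pure-Python sketch over flat lists (not the TensorFlow op itself), including the documented shape check:

```python
def add_n(inputs):
    """Elementwise sum of a non-empty list of equal-length flat lists."""
    if not inputs:
        raise ValueError("inputs must be non-empty")
    length = len(inputs[0])
    if any(len(t) != length for t in inputs):
        # Mirrors the documented ValueError for mismatched shapes.
        raise ValueError("all inputs must have the same shape")
    return [sum(elems) for elems in zip(*inputs)]
```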

tf.contrib.bayesflow.stochastic_graph.surrogate_loss()

tf.contrib.bayesflow.stochastic_graph.surrogate_loss(sample_losses, stochastic_tensors=None, name='SurrogateLoss')

Surrogate loss for stochastic graphs. This function will call loss_fn on each StochasticTensor upstream of sample_losses, passing the losses that it influenced. Note that currently surrogate_loss does not work with StochasticTensors instantiated in while_loops or other control structures.

Args:
  sample_losses: a list or tuple of final losses. Each loss should be per example in the batch.
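For intuition: the standard score-function (REINFORCE) surrogate adds, for each stochastic node, a term of the form log_prob * stop_gradient(loss), so that differentiating the surrogate yields an unbiased gradient estimate of the expected loss. A toy numeric sketch of that construction (the function name is illustrative, not the bayesflow API, and no autodiff is involved here):

```python
def score_function_surrogate(sample_loss, log_prob):
    """Toy surrogate: the sample loss plus a score-function term.

    In a real stochastic graph, the loss inside the score-function
    term would be wrapped in stop_gradient, so gradients flow only
    through log_prob for that term.
    """
    score_term = log_prob * sample_loss  # stop_gradient(sample_loss) in TF
    return sample_loss + score_term
```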

tf.contrib.training.bucket_by_sequence_length()

tf.contrib.training.bucket_by_sequence_length(input_length, tensors, batch_size, bucket_boundaries, num_threads=1, capacity=32, shapes=None, dynamic_pad=False, allow_smaller_final_batch=False, keep_input=None, shared_name=None, name=None)

Lazy bucketing of inputs according to their length. This method calls tf.contrib.training.bucket under the hood, after first subdividing the bucket boundaries into separate buckets and identifying which bucket the given input_length belongs to. See the documentation for tf.contrib.training.bucket for details.
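The bucket-assignment step amounts to finding where a length falls among the boundaries. A sketch using only the standard library (the helper name is made up):

```python
import bisect

def which_bucket(input_length, bucket_boundaries):
    """Return the bucket index for a sequence length.

    Boundaries [b0, b1, ...] define half-open buckets
    [0, b0), [b0, b1), ..., [b_last, inf).
    """
    return bisect.bisect_right(sorted(bucket_boundaries), input_length)
```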

tf.accumulate_n()

tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)

Returns the element-wise sum of a list of tensors. Optionally, pass shape and tensor_dtype for shape and type checking; otherwise, these are inferred.

NOTE: This operation is not differentiable and cannot be used if inputs depend on trainable variables. Please use tf.add_n for such cases.

For example:

# tensor 'a' is [[1, 2], [3, 4]]
# tensor 'b' is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]

# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32) ==> [[7, 4], [6, 14]]

tf.contrib.distributions.QuantizedDistribution.sample()

tf.contrib.distributions.QuantizedDistribution.sample(sample_shape=(), seed=None, name='sample')

Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.

Args:
  sample_shape: 0-D or 1-D int32 Tensor. Shape of the generated samples.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with prepended dimensions sample_shape.
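The "prepended dimensions" contract can be sketched independently of any particular distribution: sample_shape dims come first, and an empty sample_shape yields a single draw, matching sample() without arguments. A pure-Python illustration (the function and parameter names are assumptions, not the TF API):

```python
import random

def sample(draw_one, sample_shape=()):
    """Draw samples with the given shape prepended.

    draw_one: a zero-argument function returning one sample.
    sample_shape: tuple of ints; () yields a single scalar draw.
    """
    if not sample_shape:
        return draw_one()
    head, rest = sample_shape[0], sample_shape[1:]
    # Recurse over the remaining dimensions of sample_shape.
    return [sample(draw_one, rest) for _ in range(head)]
```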