tf.train.shuffle_batch()

tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None) Creates batches by randomly shuffling tensors. This function adds the following to the current Graph: a shuffling queue into which tensors from tensors are enqueued; a dequeue_many operation to create batches from the queue; and a QueueRunner, added to the QUEUE_RUNNER collection, to enqueue the tensors from tensors.
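The queue semantics described above can be sketched in pure Python. This is a hypothetical helper, not part of TensorFlow: a bounded buffer is kept topped up to capacity, and each batch is drawn by dequeuing uniformly random elements; min_after_dequeue is the minimum buffer occupancy that guarantees good mixing.

```python
import random

def shuffle_batch_sim(stream, batch_size, capacity, min_after_dequeue, seed=None):
    """Pure-Python sketch of the shuffling-queue behavior tf.train.shuffle_batch
    sets up. The real op also blocks dequeues that would drop the buffer below
    min_after_dequeue while input remains; here we simply require a capacity
    large enough that refilling to capacity always satisfies that constraint."""
    assert capacity >= min_after_dequeue + batch_size, "capacity too small"
    rng = random.Random(seed)
    buf, it, exhausted = [], iter(stream), False
    while True:
        # Enqueue: top the buffer up to `capacity` while input remains.
        while len(buf) < capacity and not exhausted:
            try:
                buf.append(next(it))
            except StopIteration:
                exhausted = True
        if len(buf) < batch_size:  # not enough left for a full batch
            break
        # Dequeue `batch_size` elements chosen uniformly at random.
        yield [buf.pop(rng.randrange(len(buf))) for _ in range(batch_size)]
```

Because elements are popped at random positions in the buffer, the output order is a (locally) shuffled permutation of the input stream.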

tf.contrib.framework.get_variables_by_suffix()

tf.contrib.framework.get_variables_by_suffix(suffix, scope=None) Gets the list of variables that end with the given suffix. Args: suffix: suffix for filtering the variables to return. scope: an optional scope for filtering the variables to return. Returns: a copied list of variables with the given suffix.
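The filtering this helper performs can be sketched over plain objects carrying a .name attribute (a hypothetical stand-in for tf.Variable; real variable names also carry an output index like ':0', which the contrib helper accounts for and which is ignored here for clarity):

```python
from collections import namedtuple

Var = namedtuple("Var", ["name"])

def get_variables_by_suffix(variables, suffix, scope=None):
    """Sketch: keep variables whose name ends with `suffix`, optionally
    restricted to names under `scope`. Returns a new (copied) list,
    matching the documented behavior."""
    result = []
    for v in variables:
        if scope is not None and not v.name.startswith(scope):
            continue  # optional scope filter
        if v.name.endswith(suffix):
            result.append(v)
    return result
```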

tf.contrib.distributions.WishartCholesky.param_static_shapes()

tf.contrib.distributions.WishartCholesky.param_static_shapes(cls, sample_shape) param_shapes with static (i.e. TensorShape) shapes. Args: sample_shape: TensorShape or Python list/tuple. Desired shape of a call to sample(). Returns: dict of parameter name to TensorShape. Raises: ValueError: if sample_shape is a TensorShape and is not fully defined.

tf.contrib.distributions.NormalWithSoftplusSigma.survival_function()

tf.contrib.distributions.NormalWithSoftplusSigma.survival_function(value, name='survival_function') Survival function. Given random variable X, the survival function is defined as: survival_function(x) = P[X > x] = 1 - P[X <= x] = 1 - cdf(x). Args: value: float or double Tensor. name: The name to give this op. Returns: Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
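The identity survival_function(x) = 1 - cdf(x) holds for any distribution; a minimal sketch for a plain Normal (NormalWithSoftplusSigma is a Normal whose scale parameter is passed through softplus first, which does not change this identity):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def survival_function(x, mu=0.0, sigma=1.0):
    # survival_function(x) = P[X > x] = 1 - cdf(x)
    return 1.0 - normal_cdf(x, mu, sigma)
```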

tf.contrib.losses.absolute_difference()

tf.contrib.losses.absolute_difference(predictions, targets, weight=1.0, scope=None) Adds an Absolute Difference loss to the training procedure. weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.
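The three weighting modes above can be sketched with numpy. This is an illustrative reimplementation, not the contrib op itself; in particular it reduces with a plain mean, whereas the real op normalizes by the number of elements with nonzero weight:

```python
import numpy as np

def absolute_difference(predictions, targets, weight=1.0):
    """Sketch of the weighted absolute-difference loss. `weight` may be a
    scalar, a [batch_size] vector (per-sample rescaling), or an array
    matching predictions' shape (per-element rescaling)."""
    losses = np.abs(np.asarray(predictions, dtype=float)
                    - np.asarray(targets, dtype=float))
    w = np.asarray(weight, dtype=float)
    if w.ndim == 1:
        # Per-sample weights: reshape to broadcast over the feature axes.
        w = w.reshape(-1, *([1] * (losses.ndim - 1)))
    return float(np.mean(losses * w))
```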

tf.contrib.distributions.MultivariateNormalCholesky.validate_args

tf.contrib.distributions.MultivariateNormalCholesky.validate_args Python boolean indicating whether possibly-expensive checks are enabled.

tf.summary.scalar()

tf.summary.scalar(display_name, tensor, description='', labels=None, collections=None, name=None) Outputs a Summary protocol buffer containing a single scalar value. The generated Summary has a Tensor.proto containing the input Tensor. Args: display_name: A name to associate with the data series. Will be used to organize output data and as a name in visualizers. tensor: A tensor containing a single floating point or integer value. description: An optional long description of the data being output.

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.log_pmf()

tf.contrib.distributions.GammaWithSoftplusAlphaBeta.log_pmf(value, name='log_pmf') Log probability mass function. Args: value: float or double Tensor. name: The name to give this op. Returns: log_pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. Raises: TypeError: if the distribution is continuous (is_continuous is True).

tf.contrib.bayesflow.variational_inference.elbo()

tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO') Evidence Lower BOund. log p(x) >= ELBO. Optimization objective for inference of hidden variables by variational inference. This function is meant to be used in conjunction with DistributionTensor. The user should build out the inference network, using DistributionTensors as latent variables, and the generative network. elbo at minimum needs p(x|Z) and assumes that each latent DistributionTensor has an associated prior (registered, or supplied via variational_with_prior).
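The quantity being lower-bounded decomposes as ELBO = E_q[log p(x|Z)] - KL(q(Z) || p(Z)). A numeric sketch of that decomposition for a univariate Gaussian variational posterior and prior, using the closed-form Gaussian KL (this illustrates the objective, not the contrib API itself):

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL(q || p) for univariate Gaussians.
    return (math.log(sigma_p / sigma_q)
            + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
            - 0.5)

def elbo(expected_log_likelihood, mu_q, sigma_q, mu_p, sigma_p):
    # ELBO = E_q[log p(x|Z)] - KL(q(Z) || p(Z))  <=  log p(x)
    return expected_log_likelihood - gaussian_kl(mu_q, sigma_q, mu_p, sigma_p)
```

Since the KL term is nonnegative, the ELBO never exceeds the expected log-likelihood, and it is tight exactly when q equals the prior.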

tf.contrib.learn.TensorFlowRNNRegressor.weights_

tf.contrib.learn.TensorFlowRNNRegressor.weights_ Returns the weights of the RNN layer.