tf.contrib.copy_graph.copy_op_to_graph()

tf.contrib.copy_graph.copy_op_to_graph(org_instance, to_graph, variables, scope='') Given an Operation `org_instance` from one `Graph`, initializes and returns a copy of it from another `Graph`, under the specified scope (default `''`). The copying is done recursively, so any Operation whose output is required to evaluate org_instance is also copied (unless already done). Since Variable instances are copied separately, those required to evaluate org_instance must be provided as input. Args: org_ins

tf.image.adjust_saturation()

tf.image.adjust_saturation(image, saturation_factor, name=None) Adjust saturation of an RGB image. This is a convenience method that converts an RGB image to float representation, converts it to HSV, multiplies the saturation channel by saturation_factor, converts back to RGB and then back to the original data type. If several adjustments are chained, it is advisable to minimize the number of redundant conversions. image is an RGB image. The image saturation is adjusted by converting the image to HSV and multiplying the saturation (S) channel by saturation_factor, then converting back to RGB.
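The HSV round trip described above can be sketched for a single pixel with the stdlib colorsys module. adjust_saturation_px is a hypothetical pure-Python analogue, not the TensorFlow implementation; it only illustrates the convert-scale-convert-back idea.

```python
import colorsys

def adjust_saturation_px(rgb, factor):
    """Scale the HSV saturation of one float RGB pixel (values in [0, 1]).

    Hypothetical single-pixel sketch of the adjust_saturation idea:
    RGB -> HSV, scale S, clip, HSV -> RGB.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(max(s * factor, 0.0), 1.0)  # clip the scaled saturation to [0, 1]
    return colorsys.hsv_to_rgb(h, s, v)

# A saturation_factor of 0.0 desaturates completely: pure red becomes gray.
print(adjust_saturation_px((1.0, 0.0, 0.0), 0.0))  # -> (1.0, 1.0, 1.0)
```

A factor of 1.0 leaves the pixel unchanged (up to float rounding), which is why chained adjustments should avoid redundant conversions: each round trip costs two color-space transforms.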

tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ] Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1. Args: value: float or double Tensor. name: The name to give this op. Returns: logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
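The x << -1 remark can be made concrete for the Laplace distribution itself. The sketch below (pure Python, not the TF kernel) uses the closed-form Laplace CDF: in the left tail log(cdf(x)) reduces to log(0.5) + (x - loc)/scale, which stays finite long after cdf(x) has underflowed to 0.0.

```python
import math

def laplace_log_cdf(x, loc=0.0, scale=1.0):
    """Numerically stable log CDF of Laplace(loc, scale) -- a sketch of why
    log_cdf exists as its own method rather than log(cdf(x))."""
    z = (x - loc) / scale
    if z <= 0.0:
        return math.log(0.5) + z            # exact in log space for the left tail
    return math.log1p(-0.5 * math.exp(-z))  # right half via log1p for accuracy

# Deep in the left tail cdf(x) = 0.5 * exp(-800) underflows to exactly 0.0,
# so log(cdf(x)) would be -inf; the stable form is still correct.
print(laplace_log_cdf(-800.0))  # -> about -800.69
```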

tf.contrib.distributions.LaplaceWithSoftplusScale.log_survival_function()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_survival_function(value, name='log_survival_function') Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ] = Log[ 1 - P[X <= x] ] = Log[ 1 - cdf(x) ] Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1. Args: value: float or double Tensor. name: The name to give this op.
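The x >> 1 case mirrors the log_cdf situation on the other tail. Again a pure-Python sketch for the Laplace distribution (illustrative, not the TF implementation): in the right tail, log(0.5) - (x - loc)/scale is exact, while 1 - cdf(x) would round to 0.0 and give -inf.

```python
import math

def laplace_log_sf(x, loc=0.0, scale=1.0):
    """Numerically stable log survival function, log P[X > x], for
    Laplace(loc, scale) -- a sketch of the 'different approximation' idea."""
    z = (x - loc) / scale
    if z >= 0.0:
        return math.log(0.5) - z           # upper tail, exact in log space
    return math.log1p(-0.5 * math.exp(z))  # lower half via 1 - cdf(x)

# The survival probability at x = 800 is smaller than the smallest positive
# double, so log(1 - cdf(x)) would be -inf; its true log is representable.
print(laplace_log_sf(800.0))  # -> about -800.69
```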

tf.contrib.distributions.LaplaceWithSoftplusScale.log_pdf()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_pdf(value, name='log_pdf') Log probability density function. Args: value: float or double Tensor. name: The name to give this op. Returns: log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype. Raises: TypeError: if not is_continuous.
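The "WithSoftplusScale" part of the class name refers to how the scale parameter is constrained: an unconstrained real is passed through softplus so the resulting scale is always positive. The sketch below shows that mapping together with the Laplace log density, log p(x) = -log(2b) - |x - loc| / b; it is an illustration in pure Python, not the TF implementation, and `raw_scale` is a hypothetical name for the unconstrained parameter.

```python
import math

def softplus(x):
    # softplus(x) = log(1 + exp(x)); this form avoids overflow for large x
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def laplace_log_pdf(x, loc=0.0, raw_scale=0.0):
    """Log density of Laplace(loc, softplus(raw_scale)): a sketch of how a
    softplus-parameterized scale keeps the density well defined for any
    unconstrained raw_scale."""
    b = softplus(raw_scale)  # strictly positive for every real raw_scale
    return -math.log(2.0 * b) - abs(x - loc) / b
```

Because softplus is strictly positive, gradient-based fitting can optimize raw_scale freely without ever producing an invalid (zero or negative) scale.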

tf.contrib.bayesflow.stochastic_tensor.TransformedDistributionTensor.loss()

tf.contrib.bayesflow.stochastic_tensor.TransformedDistributionTensor.loss(final_loss, name='Loss')

tensorflow::Tensor::scalar()

TTypes< T >::ConstScalar tensorflow::Tensor::scalar() const

tf.contrib.distributions.RegisterKL.__init__()

tf.contrib.distributions.RegisterKL.__init__(dist_cls_a, dist_cls_b) Initialize the KL registrar. Args: dist_cls_a: the class of the first argument of the KL divergence. dist_cls_b: the class of the second argument of the KL divergence.

tf.contrib.distributions.Mixture.components


tf.contrib.rnn.LSTMBlockCell.__init__()

tf.contrib.rnn.LSTMBlockCell.__init__(num_units, forget_bias=1.0, use_peephole=False) Initialize the basic LSTM cell. Args: num_units: int, The number of units in the LSTM cell. forget_bias: float, The bias added to forget gates (see above). use_peephole: Whether to use peephole connections or not.
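The roles of forget_bias and use_peephole can be seen in a hand-rolled single-unit LSTM step. This is an illustrative sketch of the standard LSTM update that LSTMBlockCell fuses into one kernel, not the cell's actual code; the weight layout `w` and the `peep` tuple are hypothetical names chosen for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w, forget_bias=1.0, use_peephole=False,
              peep=(0.0, 0.0, 0.0)):
    """One step of a single-unit LSTM. `w` maps each gate name in
    {"i", "f", "o", "g"} to hypothetical (w_x, w_h, b) scalars; `peep`
    holds optional peephole weights feeding the cell state into the
    i, f, and o gates."""
    pre = {k: wx * x + wh * h + b for k, (wx, wh, b) in w.items()}
    if use_peephole:
        pre["i"] += peep[0] * c
        pre["f"] += peep[1] * c
    i = sigmoid(pre["i"])
    # forget_bias (default 1.0) is added to the forget-gate pre-activation,
    # pushing f toward 1 early in training so the cell state is retained
    # and gradients flow through c.
    f = sigmoid(pre["f"] + forget_bias)
    g = math.tanh(pre["g"])
    c_new = f * c + i * g
    if use_peephole:
        pre["o"] += peep[2] * c_new  # output gate peeks at the new cell state
    o = sigmoid(pre["o"])
    h_new = o * math.tanh(c_new)
    return h_new, c_new
```

With all weights at zero, a larger forget_bias keeps a larger fraction of the previous cell state, which is exactly the "see above" behavior the docstring refers to.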