tf.contrib.distributions.Poisson.allow_nan_stats

tf.contrib.distributions.Poisson.allow_nan_stats Python boolean describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean for Student's T for df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.
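
A minimal sketch of the flag's effect, assuming the StudentT constructor of this contrib era (df/mu/sigma arguments) and a graph-mode Session:

    import tensorflow as tf

    ds = tf.contrib.distributions
    # df = 1 has an undefined mean; with allow_nan_stats=True the stat is
    # reported as NaN instead of raising an error.
    student = ds.StudentT(df=1., mu=0., sigma=1., allow_nan_stats=True)
    with tf.Session() as sess:
        print(sess.run(student.mean()))  # nan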

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalDiagTensor.value_type

tf.contrib.distributions.ExponentialWithSoftplusLam.param_static_shapes()

tf.contrib.distributions.ExponentialWithSoftplusLam.param_static_shapes(cls, sample_shape) param_shapes with static (i.e. TensorShape) shapes.

Args:
  sample_shape: TensorShape or python list/tuple. Desired shape of a call to sample().

Returns:
  dict of parameter name to TensorShape.

Raises:
  ValueError: if sample_shape is a TensorShape and is not fully defined.
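
For instance, a sketch of the classmethod call (the 'lam' key shown in the comment is an assumption based on this class's parameterization):

    import tensorflow as tf

    ds = tf.contrib.distributions
    # Static parameter shapes needed so that sample() yields 100 draws.
    shapes = ds.ExponentialWithSoftplusLam.param_static_shapes([100])
    print(shapes)  # presumably {'lam': TensorShape([100])}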

tf.contrib.layers.fully_connected()

tf.contrib.layers.fully_connected(*args, **kwargs) Adds a fully connected layer. fully_connected creates a variable called weights, representing a fully connected weight matrix, which is multiplied by the inputs to produce a Tensor of hidden units. If a normalizer_fn is provided (such as batch_norm), it is then applied. Otherwise, if normalizer_fn is None and a biases_initializer is provided then a biases variable would be created and added to the hidden units. Finally, if activation_fn is not None, it is applied to the hidden units as well.
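
A minimal usage sketch (the layer sizes and activation choices are illustrative, not prescribed by the API):

    import tensorflow as tf

    inputs = tf.placeholder(tf.float32, shape=[None, 784])
    # Hidden layer with a nonlinearity, then a linear output layer.
    hidden = tf.contrib.layers.fully_connected(inputs, 256,
                                               activation_fn=tf.nn.relu)
    logits = tf.contrib.layers.fully_connected(hidden, 10, activation_fn=None)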

tf.image.resize_image_with_crop_or_pad()

tf.image.resize_image_with_crop_or_pad(image, target_height, target_width) Crops and/or pads an image to a target width and height. Resizes an image to a target width and height by either centrally cropping the image or padding it evenly with zeros. If width or height is greater than the specified target_width or target_height respectively, this op centrally crops along that dimension. If width or height is smaller than the specified target_width or target_height respectively, this op centrally pads with zeros along that dimension.
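
For example (the input shape is arbitrary), the height is cropped and the width is padded to reach the target:

    import tensorflow as tf

    image = tf.zeros([300, 150, 3])  # height x width x channels
    # 300 -> 200 by central cropping; 150 -> 200 by even zero padding.
    out = tf.image.resize_image_with_crop_or_pad(image, 200, 200)
    print(out.get_shape())  # (200, 200, 3)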

tf.contrib.distributions.RegisterKL.__init__()

tf.contrib.distributions.RegisterKL.__init__(dist_cls_a, dist_cls_b) Initialize the KL registrar.

Args:
  dist_cls_a: the class of the first argument of the KL divergence.
  dist_cls_b: the class of the second argument of the KL divergence.
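
RegisterKL is typically applied as a decorator. A sketch with hypothetical subclasses (dummy classes are used here because a Normal/Normal KL is already registered by the library):

    import tensorflow as tf

    ds = tf.contrib.distributions

    class MyNormalA(ds.Normal):  # hypothetical distribution classes
        pass

    class MyNormalB(ds.Normal):
        pass

    @ds.RegisterKL(MyNormalA, MyNormalB)
    def _kl_my_normals(dist_a, dist_b, name=None):
        # Illustrative body only; a real registration would compute the
        # closed-form KL divergence between the two distributions.
        return tf.constant(0.0, name=name)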

tf.image.extract_glimpse()

tf.image.extract_glimpse(input, size, offsets, centered=None, normalized=None, uniform_noise=None, name=None) Extracts a glimpse from the input tensor. Returns a set of windows called glimpses extracted at location offsets from the input tensor. If the windows only partially overlap the inputs, the non-overlapping areas will be filled with random noise. The result is a 4-D tensor of shape [batch_size, glimpse_height, glimpse_width, channels]. The channels and batch dimensions are the same as those of the input tensor. The height and width of the output windows are specified in the size parameter.
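
A shape-focused sketch (batch size, image size, and window size are arbitrary):

    import tensorflow as tf

    images = tf.zeros([4, 100, 100, 3])
    # One (y, x) offset per batch element, using the op's default
    # centered/normalized semantics.
    offsets = tf.zeros([4, 2])
    glimpses = tf.image.extract_glimpse(images, size=[28, 28], offsets=offsets)
    print(glimpses.get_shape())  # (4, 28, 28, 3)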

tf.contrib.distributions.Mixture.survival_function()

tf.contrib.distributions.Mixture.survival_function(value, name='survival_function') Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x] = 1 - P[X <= x] = 1 - cdf(x).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
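
The method follows the base Distribution pattern, so a minimal sketch with a Normal illustrates the identity (the mu/sigma argument names are assumed for this contrib era):

    import tensorflow as tf

    ds = tf.contrib.distributions
    n = ds.Normal(mu=0., sigma=1.)
    # survival_function(0.) = 1 - cdf(0.) = 0.5 for a standard normal.
    sf = n.survival_function(0.)
    with tf.Session() as sess:
        print(sess.run(sf))  # 0.5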

tf.contrib.losses.sum_of_pairwise_squares()

tf.contrib.losses.sum_of_pairwise_squares(*args, **kwargs) Adds a pairwise-errors-squared loss to the training procedure. (deprecated) THIS FUNCTION IS DEPRECATED. It will be removed after 2016-10-01. Instructions for updating: Use mean_pairwise_squared_error. Unlike the sum_of_squares loss, which is a measure of the differences between corresponding elements of predictions and targets, sum_of_pairwise_squares is a measure of the differences between pairs of corresponding elements of predictions and targets.
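
Per the deprecation notice, new code would call the replacement instead; a sketch with arbitrary values:

    import tensorflow as tf

    predictions = tf.constant([[4., 8., 12.], [8., 1., 3.]])
    targets = tf.constant([[1., 9., 2.], [-5., -5., 7.]])
    # mean_pairwise_squared_error is the recommended replacement.
    loss = tf.contrib.losses.mean_pairwise_squared_error(predictions, targets)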

tf.contrib.losses.sigmoid_cross_entropy()

tf.contrib.losses.sigmoid_cross_entropy(logits, multi_class_labels, weight=1.0, label_smoothing=0, scope=None) Creates a cross-entropy loss using tf.nn.sigmoid_cross_entropy_with_logits. weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the loss weights apply to each corresponding sample. If label_smoothing is nonzero, smooth the labels towards 1/2: new_multiclass_labels = multiclass_labels * (1 - label_smoothing) + 0.5 * label_smoothing.
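
A minimal sketch (the logits and label values are arbitrary):

    import tensorflow as tf

    logits = tf.constant([[10., -10.], [-5., 5.]])
    labels = tf.constant([[1., 0.], [0., 1.]])
    # Smooth the targets slightly towards 1/2 while computing the loss.
    loss = tf.contrib.losses.sigmoid_cross_entropy(logits, labels,
                                                   label_smoothing=0.1)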