tf.contrib.distributions.ExponentialWithSoftplusLam.param_shapes()

tf.contrib.distributions.ExponentialWithSoftplusLam.param_shapes(cls, sample_shape, name='DistributionParamShapes')

Shapes of parameters given the desired shape of a call to sample(). Subclasses should override static method _param_shapes.

Args:
  sample_shape: Tensor or python list/tuple. Desired shape of a call to sample().
  name: name to prepend ops with.

Returns:
  dict of parameter name to Tensor shapes.
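A minimal usage sketch: param_shapes is queried on the class itself; the 'lam' parameter name shown in the comment is an assumption about this contrib class.

```python
import tensorflow as tf

# Ask the class what parameter shapes are needed so that sample()
# produces draws of shape [5]. The returned dict maps parameter names
# (assumed here to include 'lam') to Tensor shapes.
dist_cls = tf.contrib.distributions.ExponentialWithSoftplusLam
shapes = dist_cls.param_shapes([5])
print(shapes)  # e.g. {'lam': <Tensor holding the shape [5]>}
```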

tf.contrib.learn.monitors.CaptureVariable.end()

tf.contrib.learn.monitors.CaptureVariable.end(session=None)

tf.QueueBase.enqueue()

tf.QueueBase.enqueue(vals, name=None)

Enqueues one element to this queue. If the queue is full when this operation executes, it will block until the element has been enqueued.

At runtime, this operation may raise an error if the queue is closed before or during its execution. If the queue is closed before this operation runs, tf.errors.CancelledError will be raised. If this operation is blocked, and either (i) the queue is closed by a close operation with cancel_pending_enqueues=True, or (ii) the session is closed, tf.errors.CancelledError will be raised.
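A minimal sketch of enqueue in graph mode, assuming a FIFOQueue (a concrete QueueBase subclass):

```python
import tensorflow as tf

# Build a small FIFO queue and enqueue a single element.
queue = tf.FIFOQueue(capacity=10, dtypes=[tf.float32])
enqueue_op = queue.enqueue([1.0])   # blocks at run time if the queue is full
dequeue_op = queue.dequeue()

with tf.Session() as sess:
    sess.run(enqueue_op)
    print(sess.run(dequeue_op))     # 1.0
```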

tf.string_to_hash_bucket_fast()

tf.string_to_hash_bucket_fast(input, num_buckets, name=None)

Converts each string in the input Tensor to its hash mod by a number of buckets. The hash function is deterministic on the content of the string within the process and will never change. However, it is not suitable for cryptography. This function may be used when CPU time is scarce and inputs are trusted or unimportant. There is a risk of adversaries constructing inputs that all hash to the same bucket. To prevent this problem, use a strong hash function with tf.string_to_hash_bucket_strong.
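A minimal usage sketch (the bucket values in the comment are illustrative, not actual hash outputs):

```python
import tensorflow as tf

# Hash a batch of feature strings into 100 buckets.
strings = tf.constant(["apple", "banana", "cherry"])
buckets = tf.string_to_hash_bucket_fast(strings, num_buckets=100)

with tf.Session() as sess:
    print(sess.run(buckets))   # e.g. [42 17 93] -- depends on the hash function
```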

tf.QueueBase.dequeue_many()

tf.QueueBase.dequeue_many(n, name=None)

Dequeues and concatenates n elements from this queue. This operation concatenates queue-element component tensors along the 0th dimension to make a single component tensor. All of the components in the dequeued tuple will have size n in the 0th dimension.

If the queue is closed and there are fewer than n elements left, then an OutOfRange exception is raised. At runtime, this operation may raise an error if the queue is closed before or during its execution.
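A minimal sketch, again assuming a FIFOQueue:

```python
import tensorflow as tf

# Enqueue five scalars, then dequeue three of them as a single tensor.
queue = tf.FIFOQueue(capacity=10, dtypes=[tf.int32])
enqueue_op = queue.enqueue_many([[1, 2, 3, 4, 5]])
batch = queue.dequeue_many(3)        # components are stacked along dimension 0

with tf.Session() as sess:
    sess.run(enqueue_op)
    print(sess.run(batch))           # [1 2 3]
```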

tf.contrib.metrics.streaming_percentage_less()

tf.contrib.metrics.streaming_percentage_less(*args, **kwargs)

Computes the percentage of values less than the given threshold. (deprecated arguments)

SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-10-19. Instructions for updating: ignore_mask is being deprecated. Instead use weights with values 0.0 and 1.0 to mask values. For example, weights=tf.logical_not(mask).

The streaming_percentage_less function creates two local variables, total and count, that are used to compute the percentage of values that fall below the given threshold.
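A rough sketch of the two-op streaming-metric pattern; the positional arguments (values, threshold) and the availability of tf.local_variables_initializer() are assumptions about the contrib API of this era:

```python
import tensorflow as tf

values = tf.placeholder(tf.float32, shape=[None])
percentage, update_op = tf.contrib.metrics.streaming_percentage_less(
    values, threshold=0.5)

with tf.Session() as sess:
    # The total/count accumulators are local variables.
    sess.run(tf.local_variables_initializer())
    sess.run(update_op, feed_dict={values: [0.1, 0.4, 0.9]})
    print(sess.run(percentage))   # ~0.667: two of the three values are below 0.5
```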

tf.contrib.distributions.Laplace.survival_function()

tf.contrib.distributions.Laplace.survival_function(value, name='survival_function')

Survival function. Given random variable X, the survival function is defined: survival_function(x) = P[X > x] = 1 - P[X <= x] = 1 - cdf(x).

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
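A minimal sketch for a standard Laplace distribution:

```python
import tensorflow as tf

# P[X > 1] for Laplace(loc=0, scale=1); equals 1 - cdf(1) = 0.5 * exp(-1).
dist = tf.contrib.distributions.Laplace(loc=0.0, scale=1.0)
sf = dist.survival_function(1.0)

with tf.Session() as sess:
    print(sess.run(sf))   # ~0.1839
```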

tf.contrib.distributions.Distribution.get_event_shape()

tf.contrib.distributions.Distribution.get_event_shape()

Shape of a single sample from a single batch as a TensorShape. Same meaning as event_shape. May be only partially defined.

Returns:
  event_shape: TensorShape, possibly unknown.
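A minimal sketch; the MultivariateNormalDiag constructor argument names (mu, diag_stdev) are assumptions about the contrib API of this era:

```python
import tensorflow as tf

# A scalar distribution has an empty event shape; a 3-dimensional
# multivariate normal has event shape [3].
laplace = tf.contrib.distributions.Laplace(loc=0.0, scale=1.0)
print(laplace.get_event_shape())   # TensorShape([])

mvn = tf.contrib.distributions.MultivariateNormalDiag(
    mu=[0.0, 0.0, 0.0], diag_stdev=[1.0, 1.0, 1.0])
print(mvn.get_event_shape())       # TensorShape([Dimension(3)])
```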

tf.contrib.distributions.StudentT.allow_nan_stats

tf.contrib.distributions.StudentT.allow_nan_stats

Python boolean describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean for Student's T for df = 1 is undefined (there is no clear way to say it is either + or - infinity), so the variance is also undefined.
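A minimal sketch; the StudentT constructor argument names (df, mu, sigma) are assumptions about the contrib API of this era:

```python
import tensorflow as tf

# With df=1 the mean of Student's T is undefined. With the default
# allow_nan_stats=True the statistic evaluates to NaN; with False, an
# error is raised instead of returning NaN.
dist = tf.contrib.distributions.StudentT(df=1.0, mu=0.0, sigma=1.0,
                                         allow_nan_stats=True)
print(dist.allow_nan_stats)        # True

with tf.Session() as sess:
    print(sess.run(dist.mean()))   # nan
```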

tf.contrib.layers.unit_norm()

tf.contrib.layers.unit_norm(*args, **kwargs)

Normalizes the given input across the specified dimension to unit length. Note that the rank of input must be known.

Args:
  inputs: A Tensor of arbitrary size.
  dim: The dimension along which the input is normalized.
  epsilon: A small value to add to the inputs to avoid dividing by zero.
  scope: Optional scope for variable_scope.

Returns:
  The normalized Tensor.

Raises:
  ValueError: If dim is smaller than the number of dimensions in 'inputs'.
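A minimal sketch normalizing each row of a 2-D tensor:

```python
import tensorflow as tf

# Normalize along dim=1 so every row has (approximately) unit L2 norm.
x = tf.constant([[3.0, 4.0], [0.0, 5.0]])
normalized = tf.contrib.layers.unit_norm(x, dim=1)

with tf.Session() as sess:
    print(sess.run(normalized))   # [[0.6, 0.8], [0.0, 1.0]]
```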