tf.contrib.learn.monitors.PrintTensor.every_n_step_end()

tf.contrib.learn.monitors.PrintTensor.every_n_step_end(step, outputs)

tf.contrib.distributions.Uniform.sample_n()

tf.contrib.distributions.Uniform.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for the RNG.
  name: Name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
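The shape contract above (an (n,) dimension prepended to the batch shape) can be illustrated with a plain NumPy sketch. The function `uniform_sample_n` below is a hypothetical stand-in for the op, not the contrib implementation:

```python
import numpy as np

def uniform_sample_n(low, high, n, seed=None):
    """Sketch of Uniform.sample_n semantics: draw n observations and
    prepend an (n,) dimension to the distribution's batch shape."""
    if not isinstance(n, (int, np.integer)):
        raise TypeError("n must be an integer type")
    rng = np.random.RandomState(seed)
    low = np.asarray(low, dtype=np.float64)
    high = np.asarray(high, dtype=np.float64)
    # Broadcast low/high to a common batch shape, then prepend (n,).
    batch_shape = np.broadcast(low, high).shape
    return rng.uniform(low, high, size=(n,) + batch_shape)

# A batch of two uniform distributions, sampled 5 times each.
samples = uniform_sample_n(low=[0.0, 1.0], high=[1.0, 3.0], n=5, seed=0)
print(samples.shape)  # (5, 2): the sample dimension is prepended
```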

tf.contrib.distributions.QuantizedDistribution.cdf()

tf.contrib.distributions.QuantizedDistribution.cdf(value, name='cdf')

Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:

  cdf(x) := P[X <= x]

Additional documentation from QuantizedDistribution:

For whole numbers y,

  cdf(y) := P[Y <= y]
          = 1,         if y >= upper_cutoff,
          = 0,         if y < lower_cutoff,
          = P[X <= y], otherwise.

Since Y only has mass at whole numbers, P[Y <= y] = P[Y <= floor(y)]. This dictates that fractional y are first floored to a whole number, and then the definition above applies.

tf.contrib.bayesflow.stochastic_tensor.GammaTensor

class tf.contrib.bayesflow.stochastic_tensor.GammaTensor

GammaTensor is a StochasticTensor backed by the distribution Gamma.

tensorflow::ThreadOptions

Options to configure a Thread. Note that the options are all hints; the underlying implementation may choose to ignore them.

Member Details

size_t tensorflow::ThreadOptions::stack_size
  Thread stack size to use (in bytes).

size_t tensorflow::ThreadOptions::guard_size
  Guard area size to use near thread stacks (in bytes).

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.parameters

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma.parameters

Dictionary of parameters used by this Distribution.

tf.train.shuffle_batch()

tf.train.shuffle_batch(tensors, batch_size, capacity, min_after_dequeue, num_threads=1, seed=None, enqueue_many=False, shapes=None, allow_smaller_final_batch=False, shared_name=None, name=None)

Creates batches by randomly shuffling tensors. This function adds the following to the current Graph:

  - A shuffling queue into which tensors from tensors are enqueued.
  - A dequeue_many operation to create batches from the queue.
  - A QueueRunner, added to the QUEUE_RUNNER collection, to enqueue the tensors from tensors.
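The queue mechanics can be sketched single-threaded in plain Python: elements are buffered up to capacity, and random dequeues only proceed while at least min_after_dequeue elements remain buffered, which is what gives the shuffling its mixing. This is an illustrative model of the behavior, not the graph ops, and it omits num_threads and allow_smaller_final_batch:

```python
import random

def shuffle_batches(items, batch_size, capacity, min_after_dequeue, seed=None):
    """Single-threaded sketch of tf.train.shuffle_batch's queue behavior.
    Requires capacity >= min_after_dequeue + batch_size, otherwise the
    dequeue condition could never be met."""
    rng = random.Random(seed)
    buf, batches, it = [], [], iter(items)
    exhausted = False
    while True:
        # Enqueue side: fill the buffer up to capacity.
        while not exhausted and len(buf) < capacity:
            try:
                buf.append(next(it))
            except StopIteration:
                exhausted = True
        # Dequeue side: emit a batch only while the post-dequeue level stays
        # at or above min_after_dequeue (relaxed once the input is exhausted).
        if not exhausted and len(buf) - batch_size < min_after_dequeue:
            continue
        if len(buf) < batch_size:
            break  # no allow_smaller_final_batch in this sketch
        batch = [buf.pop(rng.randrange(len(buf))) for _ in range(batch_size)]
        batches.append(batch)
    return batches

batches = shuffle_batches(range(20), batch_size=4, capacity=12,
                          min_after_dequeue=6, seed=0)
print(len(batches))  # 5 batches of 4, covering all 20 elements once
```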

tf.contrib.distributions.Distribution.dtype

tf.contrib.distributions.Distribution.dtype

The DType of Tensors handled by this Distribution.

tf.contrib.rnn.LayerNormBasicLSTMCell.__init__()

tf.contrib.rnn.LayerNormBasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, activation=tanh, layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0, dropout_prob_seed=None)

Initializes the basic LSTM cell.

Args:
  num_units: int, the number of units in the LSTM cell.
  forget_bias: float, the bias added to forget gates (see above).
  input_size: Deprecated and unused.
  activation: Activation function of the inner states.
  layer_norm: If True, layer normalization will be applied.
  norm_gain: float, the layer normalization gain initial value (ignored if layer_norm is False).
  norm_shift: float, the layer normalization shift initial value (ignored if layer_norm is False).
  dropout_keep_prob: float between 0 and 1, the recurrent dropout keep probability.
  dropout_prob_seed: (optional) integer, the randomness seed for the dropout.
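The effect of layer_norm, norm_gain, and norm_shift can be sketched with NumPy, assuming the usual formulation in which each example is normalized over its last axis before the learned gain and shift are applied; this is an illustration of the technique, not the cell's implementation:

```python
import numpy as np

def layer_norm(x, gain=1.0, shift=0.0, eps=1e-12):
    """Sketch of the per-example normalization applied inside the cell:
    center and scale each row to zero mean / unit variance over its
    features, then apply the gain (norm_gain) and shift (norm_shift)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gain * (x - mean) / np.sqrt(var + eps) + shift

x = np.array([[1.0, 2.0, 3.0, 4.0]])
y = layer_norm(x)                       # defaults: norm_gain=1.0, norm_shift=0.0
print(y.mean(), y.std())                # ~0.0 and ~1.0 for the row
```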

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.value_type

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.value_type