tf.contrib.distributions.ExponentialWithSoftplusLam

class tf.contrib.distributions.ExponentialWithSoftplusLam Exponential with softplus transform on lam.
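
A minimal usage sketch, assuming the contrib-era API in which the rate parameter is named lam and is passed through a softplus internally, so unconstrained (even negative) raw values still produce a valid rate:

  import tensorflow as tf

  # Raw lam values may be negative; softplus maps them to positive rates.
  dist = tf.contrib.distributions.ExponentialWithSoftplusLam(lam=[-2.0, 0.5])
  samples = dist.sample_n(5)              # Tensor of shape (5, 2)
  log_density = dist.log_prob([1.0, 2.0]) # log density at one point per batch member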

tf.contrib.framework.VariableDeviceChooser.__init__()

tf.contrib.framework.VariableDeviceChooser.__init__(num_tasks=0, job_name='ps', device_type='CPU', device_index=0) Initialize VariableDeviceChooser.
Usage:
  To use with 2 parameter servers: VariableDeviceChooser(2)
  To use without parameter servers: VariableDeviceChooser(), or VariableDeviceChooser(device_type='GPU') for GPU placement
Args:
  num_tasks: number of tasks.
  job_name: String, a name for the parameter server job.
  device_type: Optional device type string (e.g. "CPU" or "GPU").
  device_index: Optional device index.
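
A sketch of how a chooser instance might be used; it is callable, so it can be passed to tf.device as a device function (this assumes the contrib.framework behavior of placing each variable on a parameter-server task):

  import tensorflow as tf

  # Spread variables across 2 parameter-server tasks.
  chooser = tf.contrib.framework.VariableDeviceChooser(num_tasks=2)
  with tf.device(chooser):
      weights = tf.get_variable('weights', shape=[784, 10])
      biases = tf.get_variable('biases', shape=[10])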

tf.InteractiveSession

class tf.InteractiveSession A TensorFlow Session for use in interactive contexts, such as a shell. The only difference from a regular Session is that an InteractiveSession installs itself as the default session on construction. The methods Tensor.eval() and Operation.run() will use that session to run ops. This is convenient in interactive shells and IPython notebooks, as it avoids having to pass an explicit Session object to run ops. For example, a constant can be built and evaluated without an explicit session, as in the sketch below.
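
A runnable version of that example (the constant values are illustrative):

  import tensorflow as tf

  sess = tf.InteractiveSession()
  a = tf.constant(5.0)
  b = tf.constant(6.0)
  c = a * b
  # Because the InteractiveSession is installed as the default session,
  # eval() needs no explicit session argument.
  print(c.eval())   # 30.0
  sess.close()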

tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]. Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args:
  value: float or double Tensor.
  name: The name to give this op.
Returns:
  logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
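
A short sketch, assuming the contrib-era constructor takes loc and scale (with scale passed through a softplus, so an unconstrained raw value is fine):

  import tensorflow as tf

  dist = tf.contrib.distributions.LaplaceWithSoftplusScale(loc=0.0, scale=-1.0)
  # log_cdf stays numerically stable deep in the left tail, where taking
  # the log of cdf(x) directly would underflow.
  log_cdf_vals = dist.log_cdf([-20.0, 0.0, 3.0])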

tf.contrib.distributions.MultivariateNormalFull

class tf.contrib.distributions.MultivariateNormalFull The multivariate normal distribution on R^k. This distribution is defined by a 1-D mean mu and covariance matrix sigma. Evaluation of the pdf, determinant, and sampling are all O(k^3) operations.
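
A minimal construction sketch, assuming the mu/sigma constructor of the contrib-era API; pdf evaluation and sampling each cost O(k^3) because the full covariance matrix is factorized internally:

  import tensorflow as tf

  mu = tf.constant([1.0, -1.0])            # k = 2
  sigma = tf.constant([[2.0, 0.5],
                       [0.5, 1.0]])        # full covariance matrix
  mvn = tf.contrib.distributions.MultivariateNormalFull(mu, sigma)
  log_p = mvn.log_prob([0.0, 0.0])
  draws = mvn.sample_n(3)                  # Tensor of shape (3, 2)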

tf.contrib.distributions.MultivariateNormalFull.pmf()

tf.contrib.distributions.MultivariateNormalFull.pmf(value, name='pmf') Probability mass function.
Args:
  value: float or double Tensor.
  name: The name to give this op.
Returns:
  pmf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
Raises:
  TypeError: if is_continuous.
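
Since the multivariate normal is continuous (is_continuous is True), pmf raises per the docstring above; prob gives the density instead. A sketch under that assumption:

  import tensorflow as tf

  mvn = tf.contrib.distributions.MultivariateNormalFull(
      tf.constant([0.0, 0.0]), tf.constant([[1.0, 0.0], [0.0, 1.0]]))
  density = mvn.prob([0.5, -0.5])   # use prob/log_prob for continuous distributions
  # mvn.pmf([0.5, -0.5])            # would raise TypeError because is_continuous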

tf.contrib.distributions.Normal.log_cdf()

tf.contrib.distributions.Normal.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: log_cdf(x) := Log[ P[X <= x] ]. Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.
Args:
  value: float or double Tensor.
  name: The name to give this op.
Returns:
  logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
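
A small sketch, assuming the contrib-era Normal constructor with mu and sigma:

  import tensorflow as tf

  normal = tf.contrib.distributions.Normal(mu=0.0, sigma=1.0)
  # More accurate than tf.log(normal.cdf(x)) when x << -1.
  tail_log_cdf = normal.log_cdf([-8.0, -2.0, 0.0])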

tf.contrib.distributions.Gamma.log_prob()

tf.contrib.distributions.Gamma.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous).
Args:
  value: float or double Tensor.
  name: The name to give this op.
Returns:
  log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
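
A sketch assuming the contrib-era Gamma parameterization with alpha (shape) and beta (inverse scale):

  import tensorflow as tf

  gamma = tf.contrib.distributions.Gamma(alpha=3.0, beta=2.0)
  log_densities = gamma.log_prob([0.5, 1.0, 2.0])   # Tensor of shape (3,)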

tf.contrib.distributions.Exponential.sample_n()

tf.contrib.distributions.Exponential.sample_n(n, seed=None, name='sample_n') Generate n samples. Additional documentation from Gamma: see the documentation for tf.random_gamma for more details.
Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: name to give to the op.
Returns:
  samples: a Tensor with a prepended dimension (n,).
Raises:
  TypeError: if n is not an integer type.
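
A sketch, assuming the contrib-era Exponential with rate parameter lam; the first dimension of the result is n:

  import tensorflow as tf

  exp_dist = tf.contrib.distributions.Exponential(lam=[1.0, 5.0])  # batch of 2 rates
  samples = exp_dist.sample_n(n=10, seed=42)                       # shape (10, 2)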

tf.sparse_split()

tf.sparse_split(split_dim, num_split, sp_input, name=None) Split a SparseTensor into num_split tensors along split_dim. If sp_input.shape[split_dim] is not an integer multiple of num_split, the first shape[split_dim] % num_split slices each get one extra element along split_dim. For example, if split_dim = 1, num_split = 2, and the input is:

  input_tensor = shape = [2, 7]
  [    a   d e  ]
  [b c          ]

Graphically the output tensors are:

  output_tensor[0] = shape = [2, 4]
  [    a ]
  [b c   ]

  output_tensor[1] = shape = [2, 3]
  [ d e  ]
  [      ]
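
A sketch approximating the picture above (numeric values stand in for the letters), assuming the split_dim/num_split calling convention shown in the signature:

  import tensorflow as tf

  # A [2, 7] SparseTensor: entries roughly at the positions of a, d, e, b, c.
  sp_input = tf.SparseTensor(
      indices=[[0, 3], [0, 5], [0, 6], [1, 0], [1, 1]],
      values=[1, 2, 3, 4, 5],
      shape=[2, 7])
  pieces = tf.sparse_split(split_dim=1, num_split=2, sp_input=sp_input)
  # pieces[0] has dense shape [2, 4]; pieces[1] has dense shape [2, 3].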