tf.contrib.learn.DNNRegressor.weights_

tf.contrib.distributions.Laplace.log_cdf()

tf.contrib.distributions.Laplace.log_cdf(value, name='log_cdf')

Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:

  log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
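
Why a direct log_cdf beats log(cdf) for x << -1 can be seen with a plain-Python sketch for the Laplace case. This is not the TensorFlow implementation; `laplace_log_cdf` is a hypothetical helper, and the closed form below assumes a Laplace(mu, b) distribution.

```python
import math

def laplace_log_cdf(x, mu=0.0, b=1.0):
    """Log CDF of Laplace(mu, b), computed without underflow for x << mu.

    For x < mu the CDF is 0.5 * exp((x - mu) / b), so its log is simply
    log(0.5) + (x - mu) / b -- no exponential is ever evaluated.
    """
    z = (x - mu) / b
    if z < 0:
        return math.log(0.5) + z
    # For x >= mu, CDF = 1 - 0.5 * exp(-z); log1p keeps precision near 1.
    return math.log1p(-0.5 * math.exp(-z))

# The naive route fails far in the left tail: 0.5 * exp(-800) underflows
# to 0.0 in float64, so math.log() of it would raise, while the direct
# form returns the exact value log(0.5) - 800.
```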

tf.contrib.distributions.WishartFull.__init__()

tf.contrib.distributions.WishartFull.__init__(df, scale, cholesky_input_output_matrices=False, validate_args=False, allow_nan_stats=True, name='WishartFull')

Construct Wishart distributions.

Args:
  df: float or double Tensor. Degrees of freedom, must be greater than or equal to the dimension of the scale matrix.
  scale: float or double Tensor. The symmetric positive definite scale matrix of the distribution.
  cholesky_input_output_matrices: Boolean. Any function whose input or output is a matrix assumes the input is Cholesky and the output is Cholesky.
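
The df constraint above (degrees of freedom at least the dimension of the scale matrix) can be sketched as a plain validation check. `validate_wishart_args` is a hypothetical helper, not part of the API:

```python
def validate_wishart_args(df, scale_dim):
    # The Wishart distribution is only well defined when the degrees of
    # freedom are >= the dimension of the (square) scale matrix.
    if df < scale_dim:
        raise ValueError(
            "df (%r) must be >= dimension of scale matrix (%r)"
            % (df, scale_dim))
    return df
```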

tf.contrib.distributions.MultivariateNormalCholesky.log_survival_function()

tf.contrib.distributions.MultivariateNormalCholesky.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
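
The precision point is easiest to see for a one-dimensional case. The sketch below uses an Exponential(rate) distribution, not the multivariate normal, purely because its survival function has an exact closed form; both helper names are hypothetical.

```python
import math

def exp_log_survival(x, rate=1.0):
    # Survival function of Exponential(rate): P[X > x] = exp(-rate * x),
    # so the log survival function is exactly -rate * x for any x.
    return -rate * x

def naive_log_survival(x, rate=1.0):
    # log(1 - cdf(x)): for large x, cdf(x) rounds to exactly 1.0, the
    # difference becomes 0.0, and math.log(0.0) raises a ValueError --
    # this is the x >> 1 failure mode the docstring warns about.
    return math.log(1.0 - (1.0 - math.exp(-rate * x)))
```

For moderate x the two agree; for x = 800 the direct form returns -800.0 while the naive form fails entirely.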

tf.contrib.distributions.Categorical.pdf()

tf.contrib.distributions.Categorical.pdf(value, name='pdf')

Probability density function.

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

Raises:
  TypeError: if not is_continuous.
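
The is_continuous check matters here because a Categorical is discrete: it has a probability mass function, not a density, which is why calling pdf() on it raises. A plain-Python sketch of the mass function (hypothetical helper, not the API):

```python
def categorical_pmf(k, probs):
    """PMF of a Categorical distribution over {0, ..., len(probs) - 1}.

    A discrete distribution assigns probability mass to integer outcomes;
    outcomes outside the support get probability 0.
    """
    if not 0 <= k < len(probs):
        return 0.0
    return probs[k]
```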

tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma

class tf.contrib.distributions.StudentTWithAbsDfSoftplusSigma

StudentT with df = floor(abs(df)) and sigma = softplus(sigma).
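
The point of this reparameterization is that any unconstrained real-valued (df, sigma) pair maps to valid StudentT parameters: floor(abs(.)) yields a non-negative integer df, and softplus(x) = log(1 + exp(x)) is strictly positive. A plain-Python sketch (hypothetical helper names, overflow-safe softplus split assumed):

```python
import math

def softplus(x):
    # softplus(x) = log(1 + exp(x)) > 0 for all real x; the branch
    # avoids overflow of exp(x) for large positive x.
    if x > 0:
        return x + math.log1p(math.exp(-x))
    return math.log1p(math.exp(x))

def transform_params(df, sigma):
    # Map unconstrained reals to a valid (df, sigma) pair, as in the
    # class name: df = floor(abs(df)), sigma = softplus(sigma).
    return math.floor(abs(df)), softplus(sigma)
```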

tf.assert_greater()

tf.assert_greater(x, y, data=None, summarize=None, message=None, name=None)

Assert the condition x > y holds element-wise.

Example of adding a dependency to an operation:

  with tf.control_dependencies([tf.assert_greater(x, y)]):
    output = tf.reduce_sum(x)

Example of adding a dependency to the tensor being checked:

  x = tf.with_dependencies([tf.assert_greater(x, y)], x)

This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] > y[i]. If both x and y are empty, this is trivially satisfied.
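
The element-wise condition (ignoring broadcasting) can be sketched in plain Python; `assert_greater_elementwise` is a hypothetical helper, not the TensorFlow op:

```python
def assert_greater_elementwise(x, y):
    # Check x[i] > y[i] for every index; raise on the first violation.
    # Note that empty inputs pass trivially, matching the docstring.
    for i, (a, b) in enumerate(zip(x, y)):
        if not a > b:
            raise ValueError(
                "x[%d] = %r is not greater than y[%d] = %r" % (i, a, i, b))
```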

tf.contrib.distributions.Bernoulli

class tf.contrib.distributions.Bernoulli

Bernoulli distribution. The Bernoulli distribution is parameterized by p, the probability of a positive event.
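
Concretely, the parameterization by p means P[X = 1] = p and P[X = 0] = 1 - p. A plain-Python sketch of the mass function (hypothetical helper, not the API):

```python
def bernoulli_pmf(k, p):
    # Bernoulli(p): probability p of a positive event (k = 1),
    # probability 1 - p of a negative one (k = 0), zero elsewhere.
    if k == 1:
        return p
    if k == 0:
        return 1.0 - p
    return 0.0
```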

tf.contrib.distributions.BernoulliWithSigmoidP.cdf()

tf.contrib.distributions.BernoulliWithSigmoidP.cdf(value, name='cdf')

Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is:

  cdf(x) := P[X <= x]

Args:
  value: float or double Tensor.
  name: The name to give this op.

Returns:
  cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
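
For a Bernoulli whose p is derived from an unconstrained logit via a sigmoid (as this class's name suggests), the cdf is a step function with jumps at 0 and 1. A plain-Python sketch under that assumption; both helper names are hypothetical:

```python
import math

def sigmoid(z):
    # Numerically stable logistic function mapping a real logit to (0, 1).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def bernoulli_cdf(x, logit):
    # P[X <= x] for X ~ Bernoulli(sigmoid(logit)): 0 below the support,
    # 1 - p on [0, 1), and 1 from x = 1 onward.
    p = sigmoid(logit)
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0
```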

tf.contrib.graph_editor.Transformer.__call__()

tf.contrib.graph_editor.Transformer.__call__(sgv, dst_graph, dst_scope, src_scope='', reuse_dst_scope=False)

Execute the transformation.

Args:
  sgv: the source subgraph-view.
  dst_graph: the destination graph.
  dst_scope: the destination scope.
  src_scope: the source scope, which specifies the path from which the relative paths of the transformed nodes are computed. For instance, if src_scope is a/ and dst_scope is b/, then the node a/x/y will have a relative path of x/y and will be transformed into b/x/y.
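
The scope remapping described for src_scope and dst_scope is just a prefix substitution on node names. A plain-Python sketch of that step alone (hypothetical helper, not the graph_editor implementation):

```python
def transformed_name(node_name, src_scope, dst_scope):
    # Strip the source-scope prefix to get the relative path, then
    # prepend the destination scope, e.g. "a/x/y" -> "x/y" -> "b/x/y".
    if not node_name.startswith(src_scope):
        raise ValueError(
            "%r is not under source scope %r" % (node_name, src_scope))
    relative = node_name[len(src_scope):]
    return dst_scope + relative
```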