tf.contrib.distributions.Mixture.log_survival_function()

tf.contrib.distributions.Mixture.log_survival_function(value, name='log_survival_function') Log survival function. Given random variable X, the survival function is defined as: log_survival_function(x) = Log[ P[X > x] ] = Log[ 1 - P[X <= x] ] = Log[ 1 - cdf(x) ]. Typically, different numerical approximations can be used for the log survival function, which are more accurate than Log[ 1 - cdf(x) ] when x >> 1. Args: value: float or double Tensor. name: The name to give this op. Returns: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
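
A minimal usage sketch, assuming the contrib.distributions API of the release these docs describe (Normal takes mu/sigma, Categorical takes logits); the mixture parameters are hypothetical:

import tensorflow as tf

ds = tf.contrib.distributions

# Hypothetical two-component Gaussian mixture.
mix = ds.Mixture(
    cat=ds.Categorical(logits=[0.0, 0.0]),
    components=[ds.Normal(mu=-1.0, sigma=1.0), ds.Normal(mu=1.0, sigma=0.5)])

log_sf = mix.log_survival_function(2.0)  # log P[X > 2]

with tf.Session() as sess:
    print(sess.run(log_sf))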

tf.contrib.bayesflow.stochastic_tensor.DirichletTensor.mean()

tf.contrib.bayesflow.stochastic_tensor.DirichletTensor.mean(name='mean')

tf.sparse_reorder()

tf.sparse_reorder(sp_input, name=None) Reorders a SparseTensor into the canonical, row-major ordering. Note that by convention, all sparse ops preserve the canonical ordering along increasing dimension number. The only time ordering can be violated is during manual manipulation of the indices and values to add entries. Reordering does not affect the shape of the SparseTensor. For example, if sp_input has shape [4, 5] and indices / values: [0, 3]: b [0, 1]: a [3, 1]: d [2, 0]: c then the output will be a SparseTensor of shape [4, 5] and indices / values: [0, 1]: a [0, 3]: b [2, 0]: c [3, 1]: d
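
A minimal sketch of the example above, assuming a graph-mode Session:

import tensorflow as tf

# Indices deliberately out of row-major order, as in the example above.
sp_input = tf.SparseTensor(
    [[0, 3], [0, 1], [3, 1], [2, 0]],   # indices
    ["b", "a", "d", "c"],               # values
    [4, 5])                             # shape

reordered = tf.sparse_reorder(sp_input)

with tf.Session() as sess:
    result = sess.run(reordered)
    print(result.indices)  # [[0 1] [0 3] [2 0] [3 1]]
    print(result.values)   # [b a c d]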

tf.sparse_softmax()

tf.sparse_softmax(sp_input, name=None) Applies softmax to a batched N-D SparseTensor. The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order. This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to: (1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements. Hence, the SparseTensor result has exactly the same non-zero indices and shape.
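
A small sketch, assuming a graph-mode Session; note that the entry at [0, 1] stays implicitly zero rather than receiving softmax mass:

import numpy as np
import tensorflow as tf

# A 2x3 logical matrix (B=2, C=3) with indices in canonical order.
indices = [[0, 0], [0, 2], [1, 0], [1, 1]]
values = np.array([1.0, 2.0, 0.5, 0.5], dtype=np.float32)
sp = tf.SparseTensor(indices, values, [2, 3])

sp_softmax = tf.sparse_softmax(sp)

with tf.Session() as sess:
    out = sess.run(sp_softmax)
    print(out.values)  # the non-zero values in each row sum to 1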

tensorflow::Session::Create()

virtual Status tensorflow::Session::Create(const GraphDef &graph)=0 Create the graph to be used for the session. Returns an error if this session has already been created with a graph. To re-use the session with a different graph, the caller must Close() the session first.

tf.summary.scalar()

tf.summary.scalar(display_name, tensor, description='', labels=None, collections=None, name=None) Outputs a Summary protocol buffer containing a single scalar value. The generated Summary has a Tensor.proto containing the input Tensor. Args: display_name: A name to associate with the data series. Will be used to organize output data and as a name in visualizers. tensor: A tensor containing a single floating point or integer value. description: An optional long description of the data being output.
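
A minimal sketch using only the first two arguments (later releases renamed the first argument to name and dropped description/labels); the scalar value and log directory here are hypothetical:

import tensorflow as tf

loss = tf.constant(0.25)  # stand-in for a real scalar metric

summary_op = tf.summary.scalar('loss', loss)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('/tmp/logdir', sess.graph)
    writer.add_summary(sess.run(summary_op), global_step=0)
    writer.close()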

tf.scalar_mul()

tf.scalar_mul(scalar, x) Multiplies a Tensor or IndexedSlices object by a scalar. Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors. Args: scalar: A 0-D scalar Tensor. Must have known shape. x: A Tensor or IndexedSlices to be scaled. Returns: scalar * x of the same type (Tensor or IndexedSlices) as x. Raises: ValueError: if scalar is not a 0-D scalar.
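
A minimal sketch, assuming a graph-mode Session:

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
two = tf.constant(2.0)        # 0-D scalar with a known shape

y = tf.scalar_mul(two, x)     # scalar * x; also works when x is an IndexedSlices

with tf.Session() as sess:
    print(sess.run(y))        # [[2. 4.] [6. 8.]]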

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalDiagPlusVDVTTensor.entropy()

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalDiagPlusVDVTTensor.entropy(name='entropy')

tf.contrib.metrics.streaming_sensitivity_at_specificity()

tf.contrib.metrics.streaming_sensitivity_at_specificity(predictions, labels, specificity, weights=None, num_thresholds=200, metrics_collections=None, updates_collections=None, name=None) Computes the sensitivity at a given specificity. The streaming_sensitivity_at_specificity function creates four local variables, true_positives, true_negatives, false_positives and false_negatives, that are used to compute the sensitivity at the given specificity value. The threshold for the given specificity value is computed and used to evaluate the corresponding sensitivity.
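
A minimal sketch with hypothetical labels and scores; the metric's local variables must be initialized and update_op run before reading the value:

import tensorflow as tf

labels = tf.constant([0., 0., 1., 1., 1.])
predictions = tf.constant([0.1, 0.4, 0.35, 0.8, 0.9])

sensitivity, update_op = tf.contrib.metrics.streaming_sensitivity_at_specificity(
    predictions, labels, specificity=0.5)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # the four streaming counters are local variables
    sess.run(update_op)                         # accumulate counts for this batch
    print(sess.run(sensitivity))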

tf.contrib.distributions.Chi2WithAbsDf.log_cdf()

tf.contrib.distributions.Chi2WithAbsDf.log_cdf(value, name='log_cdf') Log cumulative distribution function. Given random variable X, the cumulative distribution function cdf is defined as: log_cdf(x) := Log[ P[X <= x] ]. Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1. Args: value: float or double Tensor. name: The name to give this op. Returns: logcdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
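
A minimal sketch with a hypothetical degrees-of-freedom value:

import tensorflow as tf

ds = tf.contrib.distributions

dist = ds.Chi2WithAbsDf(df=3.0)           # degrees of freedom is taken as |df|

log_cdf = dist.log_cdf([1.0, 5.0, 10.0])  # log P[X <= x], elementwise

with tf.Session() as sess:
    print(sess.run(log_cdf))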