tf.sparse_fill_empty_rows()

tf.sparse_fill_empty_rows(sp_input, default_value, name=None)

Fills empty rows in the input 2-D SparseTensor with a default value. This op adds entries with the specified default_value at index [row, 0] for any row in the input that does not already have a value.

For example, suppose sp_input has shape [5, 6] and non-empty values:

  [0, 1]: a
  [0, 3]: b
  [2, 0]: c
  [3, 1]: d

Rows 1 and 4 are empty, so the output will be of shape [5, 6] with values:

  [0, 1]: a
  [0, 3]: b
  [1, 0]: default_value
  [2, 0]: c
  [3, 1]: d
  [4, 0]: default_value
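
A minimal usage sketch of the example above, assuming the TF 1.x graph API (the SparseTensor keyword dense_shape was called shape in some earlier releases; in the releases documenting this signature the op also returns an empty-row indicator alongside the filled tensor):

import tensorflow as tf

sp_input = tf.SparseTensor(
    indices=[[0, 1], [0, 3], [2, 0], [3, 1]],
    values=["a", "b", "c", "d"],
    dense_shape=[5, 6])

# Rows 1 and 4 have no entries, so each receives default_value at column 0.
sp_filled, empty_row_indicator = tf.sparse_fill_empty_rows(sp_input, default_value="-")

with tf.Session() as sess:
    filled, indicator = sess.run([sp_filled, empty_row_indicator])
    print(filled.indices)  # now includes [1, 0] and [4, 0]
    print(indicator)       # [False  True False False  True]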

tf.contrib.distributions.MultivariateNormalCholesky.sample_n()

tf.contrib.distributions.MultivariateNormalCholesky.sample_n(n, seed=None, name='sample_n')

Generate n samples.

Args:
  n: Scalar Tensor of type int32 or int64, the number of observations to sample.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with a prepended dimension (n,).

Raises:
  TypeError: if n is not an integer type.
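
A short sketch, assuming the contrib constructor takes a mean vector mu and a Cholesky factor chol of the covariance (argument names varied across contrib releases):

import tensorflow as tf

ds = tf.contrib.distributions
mvn = ds.MultivariateNormalCholesky(
    mu=[0.0, 0.0],
    chol=[[1.0, 0.0],
          [0.5, 1.0]])

# n is prepended to the event shape: 10 draws from a 2-D Gaussian.
samples = mvn.sample_n(n=10, seed=42)

with tf.Session() as sess:
    print(sess.run(samples).shape)  # (10, 2)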

tf.contrib.distributions.QuantizedDistribution.sample()

tf.contrib.distributions.QuantizedDistribution.sample(sample_shape=(), seed=None, name='sample')

Generate samples of the specified shape. Note that a call to sample() without arguments will generate a single sample.

Args:
  sample_shape: 0-D or 1-D int32 Tensor. Shape of the generated samples.
  seed: Python integer seed for RNG.
  name: name to give to the op.

Returns:
  samples: a Tensor with prepended dimensions sample_shape.
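
A sketch of sample() on a quantized Gaussian; the base-distribution and cutoff argument names changed between contrib releases (e.g. lower_cutoff/upper_cutoff vs. low/high, mu/sigma vs. loc/scale), so treat them as placeholders:

import tensorflow as tf

ds = tf.contrib.distributions
q = ds.QuantizedDistribution(
    distribution=ds.Normal(mu=0.0, sigma=3.0),
    lower_cutoff=-5.0,
    upper_cutoff=5.0)

single = q.sample(seed=7)                      # no sample_shape: one sample
batch = q.sample(sample_shape=[4, 2], seed=7)  # shape (4, 2) prepended

with tf.Session() as sess:
    print(sess.run(single), sess.run(batch).shape)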

tf.contrib.distributions.WishartFull.event_shape()

tf.contrib.distributions.WishartFull.event_shape(name='event_shape')

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args:
  name: name to give to the op.

Returns:
  event_shape: Tensor.
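
A sketch querying the event shape of a Wishart over 3x3 matrices, assuming the contrib constructor WishartFull(df, scale); in later releases this method form was renamed event_shape_tensor():

import tensorflow as tf

ds = tf.contrib.distributions
wishart = ds.WishartFull(
    df=5.0,
    scale=[[1.0, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]])

event_shape = wishart.event_shape()  # 1-D int32 Tensor: shape of one sample

with tf.Session() as sess:
    print(sess.run(event_shape))  # [3 3]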

tensorflow::Tensor::unaligned_shaped()

TTypes< T, NDIMS >::UnalignedTensor tensorflow::Tensor::unaligned_shaped(gtl::ArraySlice< int64 > new_sizes)

tf.contrib.framework.get_model_variables()

tf.contrib.framework.get_model_variables(scope=None, suffix=None)

Gets the list of model variables, filtered by scope and/or suffix.

Args:
  scope: an optional scope for filtering the variables to return.
  suffix: an optional suffix for filtering the variables to return.

Returns:
  A list of variables from the model-variables collection, filtered by scope and suffix.
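
A sketch, assuming variables are registered in the model-variables collection (e.g. created via tf.contrib.framework.model_variable or slim layers):

import tensorflow as tf

with tf.variable_scope('encoder'):
    tf.contrib.framework.model_variable('weights', shape=[3, 3])
with tf.variable_scope('decoder'):
    tf.contrib.framework.model_variable('biases', shape=[3])

encoder_vars = tf.contrib.framework.get_model_variables(scope='encoder')
bias_vars = tf.contrib.framework.get_model_variables(suffix='biases')
print([v.op.name for v in encoder_vars])  # ['encoder/weights']
print([v.op.name for v in bias_vars])     # ['decoder/biases']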

tf.VarLenFeature.__new__()

tf.VarLenFeature.__new__(_cls, dtype)

Create a new instance of VarLenFeature(dtype).
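
In practice the constructor is called directly rather than __new__; a typical use is as a feature spec for tf.parse_example, which yields a SparseTensor for that key:

import tensorflow as tf

feature_spec = {
    'tokens': tf.VarLenFeature(tf.string),       # variable-length list per example
    'label': tf.FixedLenFeature([], tf.int64),   # fixed-length scalar per example
}

serialized = tf.placeholder(tf.string, shape=[None])  # batch of serialized tf.train.Example protos
parsed = tf.parse_example(serialized, feature_spec)
# parsed['tokens'] is a SparseTensor; parsed['label'] is a dense Tensor.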

tf.contrib.losses.log_loss()

tf.contrib.losses.log_loss(predictions, targets, weight=1.0, epsilon=1e-07, scope=None)

Adds a Log Loss term to the training procedure. weight acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weight is a tensor of size [batch_size], then the total loss for each sample of the batch is rescaled by the corresponding element in the weight vector. If the shape of weight matches the shape of predictions, then the loss of each measurable element of predictions is scaled by the corresponding value of weight.
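
A sketch using the signature above, with weight given as a [batch_size] vector so each example's loss is rescaled by its entry (later tf.losses versions renamed the arguments to labels/predictions/weights):

import tensorflow as tf

predictions = tf.constant([[0.9, 0.1],
                           [0.4, 0.6]])
targets = tf.constant([[1.0, 0.0],
                       [0.0, 1.0]])
weight = tf.constant([1.0, 0.5])  # halve the second example's contribution

loss = tf.contrib.losses.log_loss(predictions, targets, weight=weight)

with tf.Session() as sess:
    print(sess.run(loss))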

tf.contrib.learn.TensorFlowEstimator.export()

tf.contrib.learn.TensorFlowEstimator.export(*args, **kwargs)

Exports the inference graph into the given directory. (deprecated arguments)

SOME ARGUMENTS ARE DEPRECATED. They will be removed after 2016-09-23. Instructions for updating: The signature of the input_fn accepted by export is changing to be consistent with what's used by tf.Learn Estimator's train/evaluate. input_fn (and in most cases, input_feature_key) will become required args, and use_deprecated_input_fn will default to False and be removed altogether.

tf.contrib.distributions.TransformedDistribution.log_survival_function()

tf.contrib.distributions.TransformedDistribution.log_survival_function(value, name='log_survival_function')

Log survival function. Given random variable X, the survival function is defined:

  log_survival_function(x) = Log[ P[X > x] ]
                           = Log[ 1 - P[X <= x] ]
                           = Log[ 1 - cdf(x) ]

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args:
  value: float or double Tensor.
  name: name to give to the op.
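
A sketch of log_survival_function on a log-normal built as a transformed distribution; the TransformedDistribution and bijector constructors changed considerably across contrib releases, so the distribution/bijector form below is one variant only:

import tensorflow as tf

ds = tf.contrib.distributions
log_normal = ds.TransformedDistribution(
    distribution=ds.Normal(mu=0.0, sigma=1.0),
    bijector=ds.bijector.Exp())

x = tf.constant([0.5, 1.0, 10.0])
# log P[X > x]; better behaved than log(1 - cdf(x)) far out in the right tail.
log_sf = log_normal.log_survival_function(x)

with tf.Session() as sess:
    print(sess.run(log_sf))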