tf.contrib.training.resample_at_rate()

tf.contrib.training.resample_at_rate(inputs, rates, scope=None, seed=None, back_prop=False) Given inputs tensors, stochastically resamples each at a given rate. For example, if the inputs are [[a1, a2], [b1, b2]] and the rates tensor contains [3, 1], then the return value may look like [[a1, a2, a1, a1], [b1, b2, b1, b1]]. However, many other outputs are possible, since this is stochastic -- averaged over many repeated calls, each set of inputs should appear in the output rate times the number of invocations.
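The "rate times on average" behavior can be sketched in plain Python. This is only an illustration of the semantics, not the contrib implementation (which operates on tensors and uses its own sampling scheme): each row is repeated floor(rate) times, plus one extra copy with probability equal to the fractional part, so the expected count equals the rate.

```python
import math
import random

def resample_at_rate(rows, rates, rnd):
    """Stochastically repeat each row so it appears `rate` times on
    average: floor(rate) guaranteed copies, plus one more copy with
    probability frac(rate). Illustrative sketch only."""
    out = []
    for row, rate in zip(rows, rates):
        count = int(math.floor(rate))
        if rnd.random() < rate - count:
            count += 1
        out.extend([row] * count)
    return out

rnd = random.Random(0)
rows = [["a1", "a2"], ["b1", "b2"]]
rates = [3.0, 1.5]

one_call = resample_at_rate(rows, rates, rnd)

# Averaged over many calls, each row appears ~rate times per call.
trials = 4000
appearances = [0, 0]
for _ in range(trials):
    for sampled in resample_at_rate(rows, rates, rnd):
        appearances[0 if sampled[0] == "a1" else 1] += 1
avg = [a / trials for a in appearances]  # close to [3.0, 1.5]
```

A single call can produce many different outputs; only the long-run average is pinned down, which is what the docstring's "averaged over many repeated calls" caveat means.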

tf.contrib.distributions.MultivariateNormalFull.batch_shape()

tf.contrib.distributions.MultivariateNormalFull.batch_shape(name='batch_shape') Shape of a single sample from a single event index as a 1-D Tensor. The product of the dimensions of the batch_shape is the number of independent distributions of this kind the instance represents. Args: name: The name to give this op. Returns: batch_shape: Tensor.
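The "product of the dimensions is the number of independent distributions" point can be made concrete without TensorFlow. A conceptual sketch, with illustrative names rather than the tf.contrib.distributions API: a distribution parameterized by an array of means represents one independent distribution per parameter element, and batch_shape is the shape of that parameter array.

```python
from math import prod

# A parameter array of shape (2, 3): one scalar distribution per
# element, so batch_shape is (2, 3).
mu = [[0.0, 1.0, 2.0],
      [3.0, 4.0, 5.0]]

batch_shape = (len(mu), len(mu[0]))    # (2, 3)
num_distributions = prod(batch_shape)  # 2 * 3 = 6 independent distributions
```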

tf.contrib.framework.get_graph_from_inputs()

tf.contrib.framework.get_graph_from_inputs(op_input_list, graph=None) Returns the appropriate graph to use for the given inputs. If graph is provided, we validate that all inputs in op_input_list are from the same graph. Otherwise, we attempt to select a graph from the first Operation- or Tensor-valued input in op_input_list, and validate that all other such inputs are in the same graph. If the graph was not specified and it could not be inferred from op_input_list, we attempt to use the default graph.
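The selection-and-validation logic reads naturally as a small sketch. This uses stand-in Graph/Op classes, not TensorFlow, and omits the default-graph fallback; it only illustrates the "pick the first graph-carrying input, then verify the rest agree" behavior described above.

```python
class Graph:
    """Stand-in for tf.Graph."""

class Op:
    """Stand-in for an Operation/Tensor that knows its graph."""
    def __init__(self, graph):
        self.graph = graph

def get_graph_from_inputs(op_input_list, graph=None):
    """Return `graph` if given, else the graph of the first
    graph-carrying input; raise if any input disagrees."""
    for item in op_input_list:
        item_graph = getattr(item, "graph", None)
        if item_graph is None:
            continue  # plain values (floats, lists, ...) carry no graph
        if graph is None:
            graph = item_graph
        elif item_graph is not graph:
            raise ValueError("inputs are from different graphs")
    return graph

g = Graph()
assert get_graph_from_inputs([Op(g), 1.0, Op(g)]) is g
```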

tf.contrib.distributions.InverseGammaWithSoftplusAlphaBeta.log_survival_function()

tf.contrib.distributions.InverseGammaWithSoftplusAlphaBeta.log_survival_function(value, name='log_survival_function') Log survival function. Given random variable X, the survival function is defined: log_survival_function(x) = Log[ P[X > x] ] = Log[ 1 - P[X <= x] ] = Log[ 1 - cdf(x) ] Typically, different numerical approximations can be used for the log survival function, which are more accurate than computing log(1 - cdf(x)) directly when x >> 1. Args: value: float or double Tensor. name: The name to give this op. Returns: Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
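Why a direct log(1 - cdf(x)) loses accuracy for large x can be shown with the exponential distribution as a stand-in (its cdf is 1 - exp(-x), so the exact log survival function is simply -x):

```python
import math

def log_survival_naive(x):
    # log(1 - cdf(x)) computed literally: for large x, 1 - exp(-x)
    # rounds to exactly 1.0 in float64, so this takes log(0.0).
    return math.log(1.0 - (1.0 - math.exp(-x)))

def log_survival_stable(x):
    # Closed form for the exponential distribution: exact at any x.
    return -x

x = 40.0
try:
    naive = log_survival_naive(x)
except ValueError:          # math.log(0.0) -> domain error
    naive = -math.inf

stable = log_survival_stable(x)  # -40.0, no loss of precision
```

The two agree for moderate x, but the naive form collapses to -inf once 1 - cdf(x) underflows, which is exactly the regime the docstring's "more accurate ... when x >> 1" remark is about.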

tf.TextLineReader.__init__()

tf.TextLineReader.__init__(skip_header_lines=None, name=None) Create a TextLineReader. Args: skip_header_lines: An optional int. Defaults to 0. Number of lines to skip from the beginning of every file. name: A name for the operation (optional).
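What skip_header_lines=N means for a line-based reader can be sketched in plain Python: drop the first N lines of every file before emitting records. This is only an illustration of the parameter's semantics; TextLineReader itself is a TensorFlow queue-based reader op.

```python
import io

def read_lines(fileobj, skip_header_lines=0):
    """Yield lines of a file, skipping the first `skip_header_lines`
    lines (e.g. a CSV header row)."""
    for _ in range(skip_header_lines):
        fileobj.readline()
    for line in fileobj:
        yield line.rstrip("\n")

csv = io.StringIO("col_a,col_b\n1,2\n3,4\n")
records = list(read_lines(csv, skip_header_lines=1))  # header dropped
```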

tf.contrib.distributions.LaplaceWithSoftplusScale.loc

tf.contrib.distributions.LaplaceWithSoftplusScale.loc Distribution parameter for the location.
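As the class name suggests, LaplaceWithSoftplusScale leaves loc (the location) unconstrained and passes the scale parameter through softplus to keep it positive. A minimal sketch of that reparameterization, assuming the standard softplus(x) = log(1 + exp(x)):

```python
import math

def softplus(x):
    # log(1 + exp(x)): smooth, strictly positive for all real x.
    return math.log1p(math.exp(x))

raw_scale = -2.0             # unconstrained parameter, any real value
scale = softplus(raw_scale)  # always > 0, safe as a Laplace scale
loc = 0.5                    # location parameter, used unchanged
```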

tf.contrib.learn.DNNRegressor.fit()

tf.contrib.learn.DNNRegressor.fit(x=None, y=None, input_fn=None, steps=None, batch_size=None, monitors=None, max_steps=None) See Trainable. Raises: ValueError: If x or y are not None while input_fn is not None. ValueError: If both steps and max_steps are not None.
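The two ValueError conditions amount to mutual-exclusion checks on the arguments. A sketch of that validation logic (illustrative only, not the tf.contrib.learn implementation):

```python
def validate_fit_args(x=None, y=None, input_fn=None, steps=None,
                      max_steps=None):
    """Raise ValueError under the same conditions fit() documents."""
    if input_fn is not None and (x is not None or y is not None):
        raise ValueError("Cannot provide both input_fn and x or y.")
    if steps is not None and max_steps is not None:
        raise ValueError("Cannot provide both steps and max_steps.")

validate_fit_args(x=[[1.0]], y=[2.0], steps=100)   # fine: x/y + steps
validate_fit_args(input_fn=lambda: None, max_steps=5)  # fine: input_fn
```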

tf.contrib.distributions.Mixture.cdf()

tf.contrib.distributions.Mixture.cdf(value, name='cdf') Cumulative distribution function. Given random variable X, the cumulative distribution function cdf is: cdf(x) := P[X <= x] Args: value: float or double Tensor. name: The name to give this op. Returns: cdf: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
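For a mixture specifically, the cdf is the probability-weighted sum of the component cdfs: P[X <= x] = sum_k w_k * cdf_k(x). A stand-alone sketch with two exponential components (cdf(x) = 1 - exp(-x / scale)), not the tf.contrib.distributions API:

```python
import math

def exponential_cdf(x, scale):
    return 1.0 - math.exp(-x / scale)

def mixture_cdf(x, weights, scales):
    """CDF of a mixture of exponentials: weighted sum of component
    CDFs, with weights summing to 1."""
    return sum(w * exponential_cdf(x, s)
               for w, s in zip(weights, scales))

value = mixture_cdf(1.0, weights=[0.3, 0.7], scales=[1.0, 2.0])
# value is a proper probability: between 0 and 1, increasing in x.
```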

tf.contrib.distributions.Poisson.allow_nan_stats

tf.contrib.distributions.Poisson.allow_nan_stats Python boolean describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined, and so its variance is undefined as well. If allow_nan_stats is False, an exception is raised when an undefined statistic is requested; if True, NaN is returned instead.
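The contract can be sketched with the Student's t example from the text: its mean is undefined for df <= 1, so a mean() honoring allow_nan_stats either returns NaN or raises. Illustrative function names; not the tf.contrib.distributions implementation.

```python
import math

def student_t_mean(df, loc=0.0, allow_nan_stats=True):
    """Mean of Student's t: equals `loc` for df > 1, undefined
    otherwise. Undefinedness is reported per allow_nan_stats."""
    if df > 1:
        return loc
    if allow_nan_stats:
        return math.nan          # quietly signal "undefined"
    raise ValueError("mean is undefined for df <= 1")

assert math.isnan(student_t_mean(1.0))   # NaN, not an error
```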

tf.contrib.bayesflow.stochastic_tensor.SampleAndReshapeValue

class tf.contrib.bayesflow.stochastic_tensor.SampleAndReshapeValue Ask the StochasticTensor for n samples and reshape the result. Sampling from a StochasticTensor increases the rank of the value by 1 (because each sample represents a new outer dimension). This ValueType requests n samples from StochasticTensors run within its context, such that the outer two dimensions are reshaped to intermix the samples with the outermost (usually batch) dimension. Example: if mu and sigma are both shaped (2, 3), requesting n = 2 samples yields values of shape (2, 2, 3), which are reshaped to (4, 3).
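The reshape itself can be sketched with plain nested lists: n samples of a batch of shape (2, 3) give shape (n, 2, 3), and the two outer dimensions (sample and batch) are merged into one of size n * 2. Illustrative only; the real class works on TensorFlow tensors.

```python
# Build a (2, 2, 3) value: n = 2 samples of a (2, 3) batch, with
# entry i*100 + j*10 + k marking (sample i, batch j, event k).
n, batch, event = 2, 2, 3
samples = [[[float(i * 100 + j * 10 + k) for k in range(event)]
            for j in range(batch)] for i in range(n)]

# Merge the outer two dimensions: (2, 2, 3) -> (4, 3). Rows from
# different samples are intermixed with the batch dimension.
reshaped = [row for sample in samples for row in sample]
assert len(reshaped) == n * batch and len(reshaped[0]) == event
```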