tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__()

tf.contrib.rnn.CoupledInputForgetGateLSTMCell.__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tanh)

Initialize the parameters for an LSTM cell.

Args:

num_units: int, the number of units in the LSTM cell.
use_peepholes: bool, set True to enable diagonal/peephole connections.
initializer: (optional) The initializer to use for the weight and projection matrices.
num_proj: (optional) int, the output dimensionality for the projection matrices. If None, no projection is performed.
proj_clip: (optional) float, if num_proj > 0 and proj_clip is provided, the projected values are clipped elementwise to within [-proj_clip, proj_clip].
num_unit_shards: How to split the weight matrix. If > 1, the weight matrix is stored across num_unit_shards.
num_proj_shards: How to split the projection matrix. If > 1, the projection matrix is stored across num_proj_shards.
forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training.
state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. By default (False), they are concatenated along the column axis.
activation: Activation function of the inner states.
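A minimal usage sketch (TF 1.x-era contrib API; the shapes and hyperparameter values below are hypothetical):

    import tensorflow as tf

    # Build a coupled input-forget gate LSTM cell and unroll it over a
    # batch of sequences with tf.nn.dynamic_rnn.
    cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
        num_units=128, use_peepholes=True, state_is_tuple=True)
    inputs = tf.placeholder(tf.float32, [None, 20, 50])  # [batch, time, depth]
    outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)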

tensorflow::TensorShapeUtils::MakeShape()

static Status tensorflow::TensorShapeUtils::MakeShape(const int32 *dims, int64 n, TensorShape *out) Builds a TensorShape whose dimensions are dims[0], dims[1], ..., dims[n-1], stores it in *out, and returns a Status indicating whether the dimensions were valid.

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.distribution

tf.contrib.bayesflow.stochastic_tensor.MultinomialTensor.distribution The underlying Multinomial distribution object that this StochasticTensor wraps.

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalDiagTensor.distribution

tf.contrib.bayesflow.stochastic_tensor.MultivariateNormalDiagTensor.distribution The underlying MultivariateNormalDiag distribution object that this StochasticTensor wraps.
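For both classes, the property hands back the distribution the StochasticTensor was built around. A minimal sketch, assuming the TF 0.12-era contrib.bayesflow API (parameter values are hypothetical):

    import tensorflow as tf
    st = tf.contrib.bayesflow.stochastic_tensor

    with st.value_type(st.SampleValue()):
        mvn = st.MultivariateNormalDiagTensor(mu=[0., 0.], diag_stdev=[1., 1.])

    # .distribution exposes the wrapped MultivariateNormalDiag, so all of
    # its methods (mean, entropy, ...) remain available.
    mean = mvn.distribution.mean()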

tf.contrib.bayesflow.stochastic_tensor.MixtureTensor.__init__()

tf.contrib.bayesflow.stochastic_tensor.MixtureTensor.__init__(name=None, dist_value_type=None, loss_fn=score_function, **dist_args) Construct a MixtureTensor. dist_args are forwarded to the underlying Mixture distribution constructor; loss_fn (the score-function estimator by default) builds the surrogate loss term returned by loss().
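A construction sketch, assuming the TF 0.12-era API where **dist_args (here cat and components) are passed through to tf.contrib.distributions.Mixture; the component parameters are hypothetical:

    import tensorflow as tf
    st = tf.contrib.bayesflow.stochastic_tensor
    ds = tf.contrib.distributions

    with st.value_type(st.SampleValue()):
        # cat and components are forwarded to the Mixture constructor.
        mix = st.MixtureTensor(
            cat=ds.Categorical(logits=[0.3, 0.7]),
            components=[ds.Normal(mu=-1., sigma=1.),
                        ds.Normal(mu=1., sigma=1.)])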

tf.contrib.distributions.QuantizedDistribution.log_prob()

tf.contrib.distributions.QuantizedDistribution.log_prob(value, name='log_prob')

Log probability density/mass function (depending on is_continuous).

Additional documentation from QuantizedDistribution:

For whole numbers y,

P[Y = y] := P[X <= lower_cutoff],     if y == lower_cutoff,
         := P[X > upper_cutoff - 1],  if y == upper_cutoff,
         := 0,                        if y < lower_cutoff or y > upper_cutoff,
         := P[y - 1 < X <= y],        all other y.

The base distribution's log_cdf method must be defined on y - 1.
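A sketch of quantizing a base distribution and evaluating the log mass (TF 0.12-era contrib API; the cutoffs and parameters are hypothetical):

    import tensorflow as tf
    ds = tf.contrib.distributions

    # Quantize a Normal onto whole numbers, folding mass outside [0, 10]
    # into the boundary atoms as described above.
    q = ds.QuantizedDistribution(
        distribution=ds.Normal(mu=5.0, sigma=2.0),
        lower_cutoff=0.0, upper_cutoff=10.0)
    log_pmf = q.log_prob([0., 3., 10.])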

tf.contrib.bayesflow.stochastic_tensor.BetaWithSoftplusABTensor.loss()

tf.contrib.bayesflow.stochastic_tensor.BetaWithSoftplusABTensor.loss(final_loss, name='Loss') Returns the term this StochasticTensor contributes to the surrogate loss, computed by its loss_fn from final_loss, or None if there is no contribution.
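A sketch of how the loss term is typically obtained (TF 0.12-era API; the downstream loss here is hypothetical):

    import tensorflow as tf
    st = tf.contrib.bayesflow.stochastic_tensor

    with st.value_type(st.SampleValue()):
        p = st.BetaWithSoftplusABTensor(a=1.0, b=2.0)

    final_loss = tf.square(p)        # downstream loss built from the sample
    surrogate = p.loss(final_loss)   # score-function term, or None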

tf.contrib.distributions.Gamma

class tf.contrib.distributions.Gamma

The Gamma distribution with parameters alpha and beta. The parameters are the shape and inverse scale parameters alpha, beta.

The PDF of this distribution is:

pdf(x) = beta^alpha * x^(alpha - 1) * e^(-beta * x) / Gamma(alpha),  x > 0

and the CDF of this distribution is:

cdf(x) = GammaInc(alpha, beta * x) / Gamma(alpha),  x > 0

where GammaInc is the lower incomplete Gamma function.

WARNING: This distribution may draw 0-valued samples for small alpha values. See the note on tf.random_gamma.
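A quick usage sketch (the parameter values are hypothetical):

    import tensorflow as tf
    ds = tf.contrib.distributions

    # alpha is the shape parameter, beta the inverse scale.
    gamma = ds.Gamma(alpha=3.0, beta=2.0)
    with tf.Session() as sess:
        mean, density = sess.run([gamma.mean(), gamma.prob(1.5)])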

tf.contrib.learn.monitors.LoggingTrainable.step_begin()

tf.contrib.learn.monitors.LoggingTrainable.step_begin(step)

Overrides BaseMonitor.step_begin. When overriding this method, you must call the super implementation.

Args:

step: int, the current value of the global step.

Returns: A list, the result of every_n_step_begin if that was called this step, or an empty list otherwise.

Raises: ValueError: if called more than once during a step.
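A sketch of overriding step_begin in a subclass while honoring the requirement to call the super implementation (the subclass and its body are hypothetical):

    import tensorflow as tf

    class VerboseLoggingTrainable(tf.contrib.learn.monitors.LoggingTrainable):
      def step_begin(self, step):
        # Must delegate to the super implementation first.
        tensors = super(VerboseLoggingTrainable, self).step_begin(step)
        print('step_begin at global step %d' % step)
        return tensors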

tf.contrib.graph_editor.swap()

tf.contrib.graph_editor.swap(sgv0, sgv1) Swap the inputs and outputs of sgv0 and sgv1 (see _reroute).
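A minimal sketch on two single-op subgraph views (the graph and op names are hypothetical):

    import tensorflow as tf
    ge = tf.contrib.graph_editor

    a = tf.constant(1.0, name='a')
    b = tf.constant(2.0, name='b')
    x = tf.square(a, name='x')
    y = tf.square(b, name='y')

    # After the swap, x's op reads from b and y's op reads from a, and any
    # consumers of the two outputs are exchanged as well.
    ge.swap(ge.sgv(x.op), ge.sgv(y.op))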