tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob()

tf.contrib.distributions.LaplaceWithSoftplusScale.log_prob(value, name='log_prob') Log probability density/mass function (depending on is_continuous). Args: value: float or double Tensor. name: The name to give this op. Returns: log_prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
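A minimal usage sketch (the loc/scale constructor arguments are assumed from the contrib-era Laplace API; the softplus transform guarantees a positive scale):

import tensorflow as tf

# Any real-valued `scale` is valid here: it is passed through softplus internally.
dist = tf.contrib.distributions.LaplaceWithSoftplusScale(loc=0.0, scale=1.0)
lp = dist.log_prob([0.0, 1.0, -2.0])  # log densities, broadcast over the values

with tf.Session() as sess:
    print(sess.run(lp))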

tensorflow::TensorShape::dims()

int tensorflow::TensorShape::dims() const Return the number of dimensions in the tensor.

tf.contrib.learn.infer()

tf.contrib.learn.infer(restore_checkpoint_path, output_dict, feed_dict=None) Restore graph from restore_checkpoint_path and run output_dict tensors. If restore_checkpoint_path is supplied, restore from checkpoint. Otherwise, init all variables. Args: restore_checkpoint_path: A string containing the path to a checkpoint to restore. output_dict: A dict mapping string names to Tensor objects to run. Tensors must all be from the same graph. feed_dict: dict object mapping Tensor objects to input values to feed. Returns: Dict of values read from output_dict tensors; keys are the same as in output_dict.
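A hedged sketch of typical use; the graph must match the one that produced the checkpoint, and the checkpoint path is hypothetical:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4], name='x')
w = tf.get_variable('w', shape=[4, 1])
predictions = tf.matmul(x, w)

# Restores variables from the checkpoint, then evaluates output_dict.
results = tf.contrib.learn.infer(
    restore_checkpoint_path='/tmp/model.ckpt',  # hypothetical path
    output_dict={'predictions': predictions},
    feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]})
print(results['predictions'])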

tf.contrib.bayesflow.variational_inference.elbo()

tf.contrib.bayesflow.variational_inference.elbo(log_likelihood, variational_with_prior=None, keep_batch_dim=True, form=None, name='ELBO') Evidence Lower BOund. log p(x) >= ELBO. Optimization objective for inference of hidden variables by variational inference. This function is meant to be used in conjunction with DistributionTensor. The user should build out the inference network, using DistributionTensors as latent variables, and the generative network. elbo at minimum needs p(x|Z) and assumes that all DistributionTensors upstream of p(x|Z) are the variational distributions.
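A rough sketch of the intended pattern; the NormalTensor construction, value_type context, and sigmoid likelihood below are illustrative assumptions about the contrib-era API, not part of this function's contract:

import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib.bayesflow import stochastic_tensor as st
from tensorflow.contrib.bayesflow import variational_inference as vi

x = tf.placeholder(tf.float32, shape=[None, 10])

# Inference network q(Z|x): a stochastic tensor upstream of the likelihood.
mu = layers.fully_connected(x, 5, activation_fn=None)
sigma = tf.nn.softplus(layers.fully_connected(x, 5, activation_fn=None))
with st.value_type(st.SampleAndReshapeValue()):
    z = st.NormalTensor(mu=mu, sigma=sigma)

# Generative network p(x|Z); stochastic tensors convert to Tensors implicitly.
logits = layers.fully_connected(tf.identity(z), 10, activation_fn=None)
log_likelihood = -tf.nn.sigmoid_cross_entropy_with_logits(logits, x)

# Maximize the ELBO by minimizing its negative.
loss = -tf.reduce_sum(vi.elbo(log_likelihood))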

tf.contrib.bayesflow.stochastic_tensor.NormalTensor.dtype

tf.contrib.bayesflow.stochastic_tensor.NormalTensor.dtype The DType of values produced by this stochastic tensor.
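For example (constructing a NormalTensor this way is an assumption about the contrib-era API):

import tensorflow as tf
from tensorflow.contrib.bayesflow import stochastic_tensor as st

with st.value_type(st.SampleValue()):
    z = st.NormalTensor(mu=0.0, sigma=1.0)  # assumed constructor arguments
print(z.dtype)  # DType of the values this tensor produces, e.g. tf.float32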

tf.ReaderBase.restore_state()

tf.ReaderBase.restore_state(state, name=None) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args: state: A string Tensor. Result of a SerializeState of a Reader with matching type. name: A name for the operation (optional). Returns: The created Operation.
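A hedged sketch; TextLineReader is assumed here to be a reader whose state can be serialized, and 'data.txt' is a hypothetical file:

import tensorflow as tf

filename_queue = tf.train.string_input_producer(['data.txt'])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

state = reader.serialize_state()          # string Tensor holding the reader state
restore_op = reader.restore_state(state)  # Operation restoring that state later

A reader that does not support serialization raises an Unimplemented error when either op runs.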

tf.contrib.framework.arg_scope()

tf.contrib.framework.arg_scope(list_ops_or_scope, **kwargs) Stores the default arguments for the given set of list_ops. For usage, please see examples at top of the file. Args: list_ops_or_scope: List or tuple of operations to set argument scope for, or a dictionary containing the current scope. When list_ops_or_scope is a dict, kwargs must be empty. When list_ops_or_scope is a list or tuple, every op in it needs to be decorated with @add_arg_scope to work. **kwargs: keyword=value pairs that will define the defaults for each op in list_ops. All the ops need to accept the given set of arguments.
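A typical use, adapted from the common pattern (layers.conv2d is decorated with @add_arg_scope, so it can pick up scoped defaults):

import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.contrib.framework import arg_scope

inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])

# Both conv2d calls inherit the scoped defaults; the second overrides padding.
with arg_scope([layers.conv2d], padding='SAME',
               weights_regularizer=layers.l2_regularizer(0.05)):
    net = layers.conv2d(inputs, 64, [3, 3])
    net = layers.conv2d(net, 128, [3, 3], padding='VALID')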

tf.contrib.distributions.Distribution.prob()

tf.contrib.distributions.Distribution.prob(value, name='prob') Probability density/mass function (depending on is_continuous). Args: value: float or double Tensor. name: The name to give this op. Returns: prob: a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
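For instance, with a continuous distribution (the mu/sigma constructor arguments follow the contrib-era Normal API):

import tensorflow as tf

normal = tf.contrib.distributions.Normal(mu=0.0, sigma=1.0)
p = normal.prob([0.0, 0.5, 1.0])  # densities, since Normal is continuous

with tf.Session() as sess:
    print(sess.run(p))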

tf.contrib.rnn.LayerNormBasicLSTMCell.__init__()

tf.contrib.rnn.LayerNormBasicLSTMCell.__init__(num_units, forget_bias=1.0, input_size=None, activation=tanh, layer_norm=True, norm_gain=1.0, norm_shift=0.0, dropout_keep_prob=1.0, dropout_prob_seed=None) Initializes the basic LSTM cell. Args: num_units: int, The number of units in the LSTM cell. forget_bias: float, The bias added to forget gates (see above). input_size: Deprecated and unused. activation: Activation function of the inner states. layer_norm: If True, layer normalization will be applied. norm_gain: float, The layer normalization gain initial value (ignored if layer_norm is False). norm_shift: float, The layer normalization shift initial value (ignored if layer_norm is False). dropout_keep_prob: unit Tensor or float between 0 and 1 representing the recurrent dropout probability; if 1.0, no dropout is applied. dropout_prob_seed: (optional) integer, random seed.
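A minimal sketch of constructing the cell and unrolling it with dynamic_rnn (the shapes are illustrative):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 20, 8])  # [batch, time, features]

cell = tf.contrib.rnn.LayerNormBasicLSTMCell(
    num_units=64, layer_norm=True, dropout_keep_prob=0.9)

outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)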

tf.contrib.crf.CrfForwardRnnCell.__call__()

tf.contrib.crf.CrfForwardRnnCell.__call__(inputs, state, scope=None) Build the CrfForwardRnnCell. Args: inputs: A [batch_size, num_tags] matrix of unary potentials. state: A [batch_size, num_tags] matrix containing the previous alpha values. scope: Unused variable scope of this cell. Returns: new_alphas, new_alphas: A pair of [batch_size, num_tags] matrices containing the new alpha values.
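This cell is normally driven internally by the CRF forward algorithm (e.g. via tf.contrib.crf.crf_log_likelihood), but a direct call looks roughly like this; the shapes and random inputs are illustrative:

import tensorflow as tf

num_tags, batch_size = 5, 2
transition_params = tf.random_normal([num_tags, num_tags])
cell = tf.contrib.crf.CrfForwardRnnCell(transition_params)

unary = tf.random_normal([batch_size, num_tags])   # inputs: unary potentials
alphas = tf.random_normal([batch_size, num_tags])  # state: previous alphas
new_alphas, _ = cell(unary, alphas)  # output and new state are both the alphas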