class tf.contrib.graph_editor.SubGraphView
A subgraph view on an existing tf.Graph.
An instance of this class is a subgraph view on an existing tf.Graph. "subgraph" means that it can represent part of the whole tf.Graph. "view" means that it only provides a passive observation and does not act on the tf.Graph. Note that in this documentation, the term "subgraph" is often used as a substitute for "subgraph view".
A subgraph contains:
* a list of input tensors, accessible via the "inputs" property,
* a list of output tensors, accessible via the "outputs" property,
* and the operations in between, accessible via the "ops" property.
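As a minimal sketch (TensorFlow 1.x), the three properties can be inspected on a view built with the module's `sgv` helper, which is assumed here to accept tf.Operation objects directly:

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

# Build a tiny graph: d = (a + b)^2.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = tf.add(a, b, name="c")
d = tf.square(c, name="d")

sgv = ge.sgv(c.op, d.op)  # view over the two operations
print(sgv.inputs)         # boundary input tensors of the view
print(sgv.outputs)        # boundary output tensors of the view
print(sgv.ops)            # operations contained in the view
```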
A subgraph can be seen as a function F(i0, i1, ...) -> o0, o1, ...: it takes some input tensors and returns some output tensors. The computation that the function performs is encoded in the operations of the subgraph.
The tensors (input or output) can be of two kinds:
- connected: a connected tensor connects to at least one operation contained in the subgraph. One example is a subgraph representing a single operation and its inputs and outputs: all the input and output tensors of the op are "connected".
- passthrough: a passthrough tensor does not connect to any operation contained in the subgraph. One example is a subgraph representing a single tensor: this tensor is passthrough. By default a passthrough tensor is present in both the input and output tensors of the subgraph. It can however be remapped to appear only as an input (or only as an output).
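A small sketch of the passthrough case, assuming `ge.sgv` also accepts a bare tf.Tensor:

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

t = tf.placeholder(tf.float32, name="t")

# A view over a single tensor contains no operation, so the tensor cannot
# connect to any op inside the view: it is "passthrough" and, by default,
# appears in both the inputs and the outputs.
view = ge.sgv(t)
print(view.ops)      # no operation in the view
print(view.inputs)   # contains t
print(view.outputs)  # contains t as well
```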
The input and output tensors can be remapped. For instance, an input tensor can be omitted: a subgraph representing an operation with two inputs can be remapped to only take one input. Note that this does not change the underlying tf.Graph at all (remember, it is a view); it means that the other input is being ignored, or treated as "given". The analogy with functions can be extended like this: F(x,y) is the original function. Remapping the inputs from [x, y] to just [x] means that the subgraph now represents the function F_y(x) (y is "given").
The output tensors can also be remapped. For instance, an output tensor can be omitted, and an output tensor can also be duplicated. As mentioned before, this does not change the underlying tf.Graph at all. The analogy with functions can be extended like this: F(...) -> x,y is the original function. Remapping the outputs from [x, y] to just [y, y] means that the subgraph now represents the function M(F(...)) where M is the function M(a,b) -> b,b.
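A hedged sketch of both remappings, assuming the SubGraphView methods `remap_inputs` and `remap_outputs`, each of which returns a new view and leaves the underlying tf.Graph untouched:

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

x = tf.placeholder(tf.float32, name="x")
y = tf.placeholder(tf.float32, name="y")
z = tf.add(x, y, name="z")

sgv = ge.sgv(z.op)
sgv_x_only = sgv.remap_inputs([0])       # F(x, y) viewed as F_y(x): y is "given"
sgv_dup_out = sgv.remap_outputs([0, 0])  # the single output listed twice
print(sgv_x_only.inputs)    # only the first input remains listed
print(sgv_dup_out.outputs)  # the same output tensor appears twice
```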
It is useful to describe five other kinds of tensors:
* internal: an internal tensor is a tensor connecting operations contained in the subgraph. One example is the subgraph representing the two operations A and B connected sequentially: -> A -> B ->. The middle arrow is an internal tensor.
* actual input: an input tensor of the subgraph, regardless of whether it is listed in "inputs" or not (masked-out).
* actual output: an output tensor of the subgraph, regardless of whether it is listed in "outputs" or not (masked-out).
* hidden input: an actual input which has been masked-out using an input remapping. In other words, a hidden input is a non-internal tensor not listed as an input tensor and one of whose consumers belongs to the subgraph.
* hidden output: an actual output which has been masked-out using an output remapping. In other words, a hidden output is a non-internal tensor not listed as an output and one of whose generating operations belongs to the subgraph.
Here are some useful guarantees about an instance of a SubGraphView:
* the input (or output) tensors are not internal;
* the input (or output) tensors are either "connected" or "passthrough";
* the passthrough tensors are not connected to any of the operations of the subgraph.
Note that there is no guarantee that an operation in a subgraph contributes at all to its inputs or outputs. For instance, remapping both the inputs and outputs to empty lists will produce a subgraph which still contains all the original operations. However, the remove_unused_ops function can be used to make a new subgraph view whose operations are connected to at least one of the input or output tensors.
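A small sketch of this, assuming the `remap` and `remove_unused_ops` methods of SubGraphView (both assumed to return a new view):

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

x = tf.placeholder(tf.float32, name="x")
y = tf.square(x, name="y")
z = tf.negative(y, name="z")

sgv = ge.sgv(y.op, z.op)
sgv_empty = sgv.remap([], [])  # no listed inputs or outputs, same two ops
print(len(sgv_empty.ops))      # still contains all the original operations

# Drop operations that are not connected to any listed input or output
# tensor (every operation, in this degenerate example).
sgv_pruned = sgv_empty.remove_unused_ops()
print(len(sgv_pruned.ops))
```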
An instance of this class is meant to be a lightweight object which is not modified in place by the user. Rather, the user can create new modified instances of a given subgraph. In that sense, the class SubGraphView is meant to be used like an immutable Python object.
A common problem when using views is that they can get out of sync with the data they observe (in this case, a tf.Graph). It is up to the user to ensure that this does not happen. To stay on the safe side, it is recommended that the lifetime of subgraph views be kept very short. One way to achieve this is to use subgraphs within a "with make_sgv(...) as sgv:" Python context.
To alleviate the out-of-sync problem, some functions are granted the right to modify a subgraph in place. This is typically the case of graph manipulation functions which, given some subgraphs as arguments, can modify the underlying tf.Graph. Since this modification is likely to render the subgraph view invalid, those functions can modify the argument in place to reflect the change. For instance, calling the function swap_inputs(sgv0, sgv1) will modify sgv0 and sgv1 in place to reflect the fact that their inputs have now been swapped.
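A sketch of such an in-place modification, assuming two views whose inputs have compatible signatures (one float32 tensor each):

```python
import tensorflow as tf
from tensorflow.contrib import graph_editor as ge

x = tf.placeholder(tf.float32, name="x")
y = tf.placeholder(tf.float32, name="y")
f = tf.square(x, name="f")
g = tf.negative(y, name="g")

sgv0 = ge.sgv(f.op)
sgv1 = ge.sgv(g.op)

# Rewires the underlying tf.Graph so that f consumes y and g consumes x,
# and updates both views in place so they stay in sync with the graph.
ge.swap_inputs(sgv0, sgv1)
print(sgv0.inputs, sgv1.inputs)
```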