InternalGraph¶
- class InternalGraph(name, qualname)[source]¶
InternalGraph is the main data structure used in the TracedModule. It represents the execution procedure of a Module's forward method. For example, the following code
```python
import megengine.random as rand
import megengine.functional as F
import megengine.module as M
import megengine.traced_module as tm

class MyModule(M.Module):
    def __init__(self):
        super().__init__()
        self.param = rand.normal(size=(3, 4))
        self.linear = M.Linear(4, 5)

    def forward(self, x):
        return F.relu(self.linear(x + self.param))

net = MyModule()
inp = F.zeros(shape=(3, 4))
traced_module = tm.trace_module(net, inp)
```
will produce the following `InternalGraph`:

```python
print(traced_module.graph)
```

```
MyModule.Graph (self, x) {
    %2: linear = getattr(self, "linear") -> (Linear)
    %3: param = getattr(self, "param") -> (Tensor)
    %4: add_out = x.__add__(param, )
    %5: linear_out = linear(add_out, )
    %6: relu_out = nn.relu(linear_out, )
    return relu_out
}
```
- add_input_node(shape, dtype='float32', name='args')[source]¶
Add an input node to the graph.
The new Node will be the last of the positional arguments.
- add_output_node(node)[source]¶
Add an output node to the Graph.
The Graph output will become a `tuple` after calling `add_output_node`. The first element of the `tuple` is the original output, and the second is the `node`.

For example, the following code
```python
import megengine.functional as F
import megengine.module as M
import megengine.traced_module as tm

class MyModule(M.Module):
    def forward(self, x):
        x = x + 1
        return x

net = MyModule()
inp = F.zeros(shape=(1,))
traced_module = tm.trace_module(net, inp)
graph = traced_module.graph
inp_node = graph.inputs[1]
out_node = graph.outputs[0]
graph.add_output_node(inp_node)
graph.add_output_node(out_node)
out = traced_module(inp)
```
will produce the following `InternalGraph` and `out`:

```python
print(graph)
print(out)
```

```
MyModule.Graph (self, x) {
    %2: add_out = x.__add__(1, )
    return add_out, x, add_out
}
((Tensor([1.], device=xpux:0), Tensor([0.], device=xpux:0)), Tensor([1.], device=xpux:0))
```
- exprs(recursive=True)[source]¶
Get the Exprs that constitute this graph.
- Parameters
recursive – whether to get the Exprs in the subgraph. Default: True
- Returns
An `ExprFilter` containing all Exprs of this graph.
- get_module_by_type(module_cls, recursive=True)[source]¶
Filter Nodes by the `module_type` of `ModuleNode`.
- get_node_by_id(node_id=None, recursive=True)[source]¶
Filter Nodes by their `id`.

The `id` of the `Node` can be obtained by the following code:

```python
# node: Node
print("{:i}".format(node))
print(node.__format__("i"))

# graph: InternalGraph
print("{:i}".format(graph))
print(graph.__format__("i"))
```
- get_node_by_name(name=None, ignorecase=True, recursive=True)[source]¶
Filter Nodes by their full name.
The full name of the `Node` can be obtained by the following code:

```python
# node: Node
print("{:p}".format(node))
print(node.__format__("p"))

# graph: InternalGraph
print("{:p}".format(graph))
print(graph.__format__("p"))
```
- property inputs¶
Get the list of input Nodes of this graph.
- Return type
`List`[`Node`]
- Returns
A list of `Node`.
- insert_exprs(expr=None)[source]¶
Initialize the trace mode and insertion position.
When used within a `with` statement, this will temporarily set the trace mode and then restore normal mode when the `with` statement exits:
```python
with graph.insert_exprs(e):  # set the trace mode
    ...                      # trace function or module
...  # insert exprs into graph and restore normal mode
```
- Parameters
expr (`Optional`[`Expr`]) – the `expr` after which to insert. If None, the insertion position will be automatically set based on the input node.
- Returns
A resource manager that will initialize trace mode on `__enter__` and restore normal mode on `__exit__`.
- nodes(recursive=True)[source]¶
Get the Nodes that constitute this graph.
- Parameters
recursive – whether to get the Nodes in the subgraph. Default: True
- Returns
A `NodeFilter` containing all Nodes of this graph.
- property outputs¶
Get the list of output Nodes of this graph.
- Return type
`List`[`Node`]
- Returns
A list of `Node`.
- property qualname¶
Get the qualname of this graph. The qualname can be used to get the submodule from the traced Module or Module.
Example
```python
import megengine.module as M
import megengine.traced_module as tm
import megengine as mge

class block(M.Module):
    def __init__(self):
        super().__init__()
        self.relu = M.ReLU()

    def forward(self, x):
        return self.relu(x)

class module(M.Module):
    def __init__(self):
        super().__init__()
        self.block = block()

    def forward(self, x):
        x = self.block(x)
        return x

net = module()
traced_net = tm.trace_module(net, mge.Tensor([0.]))

qualname = traced_net.block.graph.qualname  # qualname = "module.block"
qualname = qualname.split(".", 1)[-1]       # qualname = "block"

assert qualname in list(map(lambda x: x[0], net.named_modules()))
assert qualname in list(map(lambda x: x[0], traced_net.named_modules()))
```
- replace_node(repl_dict)[source]¶
Replace the Nodes in the graph.
- Parameters
repl_dict (`Dict`[`Node`, `Node`]) – the map {old_Node: new_Node} that specifies how to replace the Nodes.
- reset_outputs(outputs)[source]¶
Reset the output Nodes of the graph.
Note
This method only supports resetting the output of graphs that do not have a parent graph.
- Parameters
outputs – an object whose inner elements are Nodes. Containers such as tuple, list, dict, etc. are supported.
For example, the following code
```python
import megengine.functional as F
import megengine.module as M
import megengine.traced_module as tm

class MyModule(M.Module):
    def forward(self, x):
        x = x + 1
        return x

net = MyModule()
inp = F.zeros(shape=(1,))
traced_module = tm.trace_module(net, inp)
graph = traced_module.graph
inp_node = graph.inputs[1]
out_node = graph.outputs[0]
graph.reset_outputs((out_node, {"input": inp_node}))
out = traced_module(inp)
```
will produce the following `InternalGraph` and `out`:

```python
print(graph)
print(out)
```

```
MyModule.Graph (self, x) {
    %2: add_out = x.__add__(1, )
    return add_out, x
}
(Tensor([1.], device=xpux:0), {'input': Tensor([0.], device=xpux:0)})
```
- property top_graph¶
Get the parent graph of this graph.
- Returns
An `InternalGraph`.