InternalGraph

class InternalGraph(name, qualname)[source]

InternalGraph is the main data structure of a TracedModule. It represents the execution procedure of the Module's forward method.

For example, the following code

import megengine.random as rand
import megengine.functional as F
import megengine.module as M

import megengine.traced_module as tm

class MyModule(M.Module):
    def __init__(self):
        super().__init__()
        self.param = rand.normal(size=(3, 4))
        self.linear = M.Linear(4, 5)

    def forward(self, x):
        return F.relu(self.linear(x + self.param))

net = MyModule()

inp = F.zeros(shape=(3, 4))
traced_module = tm.trace_module(net, inp)
will produce the following InternalGraph:

print(traced_module.graph)

MyModule.Graph (self, x) {
        %2:     linear = getattr(self, "linear") -> (Linear)
        %3:     param = getattr(self, "param") -> (Tensor)
        %4:     add_out = x.__add__(param, )
        %5:     linear_out = linear(add_out, )
        %6:     relu_out = nn.relu(linear_out, )
        return relu_out
}
add_input_node(shape, dtype='float32', name='args')[source]

Add an input node to the graph.

The new Node will be appended as the last positional input of the graph.

Parameters
  • shape (Tuple[int]) – the shape of the new input Node.

  • dtype (str) – the dtype of the new input Node. Default: float32

  • name (str) – the name of the new input Node. If the name is already used in the graph, a suffix will be appended to it.
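
For example, reusing the traced_module from the example at the top of this page, a new positional input could be appended like this (a minimal sketch; the name "scale" is illustrative):

graph = traced_module.graph
# the new Node is appended after the existing positional inputs
new_inp = graph.add_input_node(shape=(3, 4), dtype="float32", name="scale")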

add_output_node(node)[source]

Add an output node to the Graph.

The Graph output will become a tuple after calling add_output_node. The first element of the tuple is the original output, and the second is the newly added node.

For example, the following code

import megengine.functional as F
import megengine.module as M
import megengine.traced_module as tm

class MyModule(M.Module):
    def forward(self, x):
        x = x + 1
        return x

net = MyModule()

inp = F.zeros(shape=(1,))
traced_module = tm.trace_module(net, inp)
graph = traced_module.graph
inp_node = graph.inputs[1]
out_node = graph.outputs[0]
graph.add_output_node(inp_node)
graph.add_output_node(out_node)
out = traced_module(inp)

will produce the following InternalGraph and out:

print(graph)
print(out)
MyModule.Graph (self, x) {
        %2:     add_out = x.__add__(1, )
        return add_out, x, add_out
}
((Tensor([1.], device=xpux:0), Tensor([0.], device=xpux:0)), Tensor([1.], device=xpux:0))
compile()[source]

Delete unused Exprs from the graph.

eval(*inputs)[source]

Call this method to execute the graph.

Parameters

inputs (Tuple[Tensor]) – the tensors corresponding to the graph.inputs[1:].
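
For example, reusing the traced_module from the example at the top of this page (a minimal sketch; eval expects one tensor for each entry of graph.inputs[1:], i.e. every input except self):

inp = F.zeros(shape=(3, 4))
out = traced_module.graph.eval(inp)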

exprs(recursive=True)[source]

Get the Exprs that constitute this graph.

Parameters

recursive – whether to get the Exprs in the subgraph. Default: True

Returns

An ExprFilter containing all Exprs of this graph.
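
The returned filter can be iterated, so, for example, the Exprs of the graph traced at the top of this page could be listed like this (a minimal sketch):

for expr in traced_module.graph.exprs():
    print(expr)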

get_dep_exprs(nodes)[source]

Get the dependent Exprs of the nodes.

Parameters

nodes (Sequence[Node]) – a list of Node.

Return type

List[Expr]

Returns

A list of dependent Exprs.
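
For example, the Exprs that the graph outputs depend on could be collected like this (a minimal sketch reusing the traced_module from the example at the top of this page):

graph = traced_module.graph
dep_exprs = graph.get_dep_exprs(graph.outputs)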

get_expr_by_id(expr_id=None, recursive=True)[source]

Filter Exprs by their id.

Parameters
  • expr_id (Optional[List[int]]) – a list of int.

  • recursive – whether to get the Exprs in the subgraph. Default: True

Returns

An ExprFilterExprId.
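
A minimal sketch, assuming an Expr with id 4 exists in the graph and that the returned filter supports as_list() like the other filters in this module:

exprs_with_id_4 = traced_module.graph.get_expr_by_id([4]).as_list()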

get_function_by_type(func=None, recursive=True)[source]

Filter CallFunction Exprs by the function they call.

Parameters
  • func (Optional[Callable]) – a built-in function, such as F.relu.

  • recursive – whether to get the Exprs in the subgraph. Default: True

Returns

An ExprFilterCallFunction.
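
For example, the relu call in the graph traced at the top of this page could be located like this (a minimal sketch; as_unique() is assumed to return the single matching Expr):

relu_expr = traced_module.graph.get_function_by_type(F.relu).as_unique()
relu_node = relu_expr.outputs[0]  # the Node produced by the relu call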

get_method_by_type(method=None, recursive=True)[source]

Filter CallMethod Exprs by the method they call.

Parameters
  • method (Optional[str]) – a method string, such as "__add__".

  • recursive – whether to get the Exprs in the subgraph. Default: True

Returns

An ExprFilterCallMethod.
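
For example, the __add__ call in the graph traced at the top of this page could be located like this (a minimal sketch; as_unique() is assumed to return the single matching Expr):

add_expr = traced_module.graph.get_method_by_type("__add__").as_unique()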

get_module_by_type(module_cls, recursive=True)[source]

Filter ModuleNodes by their module type.

Parameters
  • module_cls (Module) – a subclass of Module.

  • recursive – whether to get the Nodes in the subgraph. Default: True

Returns

A NodeFilterType.
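
For example, the Linear submodule of the graph traced at the top of this page could be located like this (a minimal sketch; as_list() is assumed to return the matching ModuleNodes):

linear_nodes = traced_module.graph.get_module_by_type(M.Linear).as_list()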

get_node_by_id(node_id=None, recursive=True)[source]

Filter Nodes by their id.

The id of the Node can be obtained by the following code

# node : Node
print("{:i}".format(node))
print(node.__format__("i"))
# graph : InternalGraph
print("{:i}".format(graph))
print(graph.__format__("i"))
Parameters
  • node_id (Optional[List[int]]) – a list of int.

  • recursive – whether to get the Nodes in the subgraph. Default: True

Returns

A NodeFilterNodeId.
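
A minimal sketch, assuming a Node with id 2 exists in the graph (its id can be checked with the "{:i}" format shown above):

nodes_with_id_2 = traced_module.graph.get_node_by_id([2]).as_list()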

get_node_by_name(name=None, ignorecase=True, recursive=True)[source]

Filter Nodes by their full name.

The full name of the Node can be obtained by the following code

# node : Node
print("{:p}".format(node))
print(node.__format__("p"))
# graph : InternalGraph
print("{:p}".format(graph))
print(graph.__format__("p"))
Parameters
  • name (Optional[str]) – a string in glob syntax that can contain ? and * to match a single character or any sequence of characters.

  • ignorecase (bool) – whether to ignore case. Default: True

  • recursive – whether to get the Nodes in the subgraph. Default: True

Returns

A NodeFilterName.
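
For example, the Nodes of the graph traced at the top of this page whose full name contains "linear" could be located like this (a minimal sketch; the glob pattern is illustrative):

linear_nodes = traced_module.graph.get_node_by_name("*linear*").as_list()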

property inputs

Get the list of input Nodes of this graph.

Return type

List[Node]

Returns

A list of Node.

insert_exprs(expr=None)[source]

Initialize the trace mode and insertion position.

When used within a with statement, this temporarily enables trace mode and restores normal mode when the with statement exits:

with graph.insert_exprs(e): # set the trace mode
    ... # trace function or module
... # insert exprs into graph and restore normal mode
Parameters

expr (Optional[Expr]) – the expr after which to insert. If None, the insertion position will be automatically set based on the input node.

Returns

A context manager that will initialize trace mode on __enter__ and restore normal mode on __exit__.
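
A concrete sketch of this pattern, reusing the traced_module from the example at the top of this page (the choice of F.neg as the newly traced operation is illustrative):

graph = traced_module.graph
relu_node = graph.get_function_by_type(F.relu).as_unique().outputs[0]
with graph.insert_exprs():                 # enter trace mode
    new_out = F.neg(relu_node)             # traced and recorded as new Exprs
graph.replace_node({relu_node: new_out})   # redirect users of the old Node
graph.compile()                            # remove Exprs that no longer reach the outputs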

property name

Get the name of this graph.

Return type

str

nodes(recursive=True)[source]

Get the Nodes that constitute this graph.

Parameters

recursive – whether to get the Nodes in the subgraph. Default: True

Returns

A NodeFilter containing all Nodes of this graph.

property outputs

Get the list of output Nodes of this graph.

Return type

List[Node]

Returns

A list of Node.

property qualname

Get the qualname of this graph. The qualname can be used to retrieve the corresponding submodule from the traced Module or the original Module.

Example

import megengine.module as M
import megengine.traced_module as tm
import megengine as mge

class block(M.Module):
    def __init__(self):
        super().__init__()
        self.relu = M.ReLU()

    def forward(self, x):
        return self.relu(x)

class module(M.Module):
    def __init__(self):
        super().__init__()
        self.block = block()

    def forward(self, x):
        x = self.block(x)
        return x

net = module()
traced_net = tm.trace_module(net, mge.Tensor([0.]))

qualname = traced_net.block.graph.qualname  # qualname = "module.block"
qualname = qualname.split(".", 1)[-1]  # qualname = "block"

assert qualname in list(map(lambda x: x[0], net.named_modules()))
assert qualname in list(map(lambda x: x[0], traced_net.named_modules()))
Return type

str

replace_node(repl_dict)[source]

Replace the Nodes in the graph.

Parameters

repl_dict (Dict[Node, Node]) – the map {old_Node: new_Node} that specifies how to replace the Nodes.

reset_outputs(outputs)[source]

Reset the output Nodes of the graph.

Note

This method only supports resetting the output of graphs that do not have a parent graph.

Parameters

outputs – an object whose inner elements are Nodes. Tuples, lists, dicts, etc. are supported.

For example, the following code

import megengine.functional as F
import megengine.module as M
import megengine.traced_module as tm

class MyModule(M.Module):
    def forward(self, x):
        x = x + 1
        return x

net = MyModule()

inp = F.zeros(shape=(1,))
traced_module = tm.trace_module(net, inp)
graph = traced_module.graph
inp_node = graph.inputs[1]
out_node = graph.outputs[0]
graph.reset_outputs((out_node, {"input": inp_node}))
out = traced_module(inp)

will produce the following InternalGraph and out:

print(graph)
print(out)
MyModule.Graph (self, x) {
        %2:     add_out = x.__add__(1, )
        return add_out, x
}
(Tensor([1.], device=xpux:0), {'input': Tensor([0.], device=xpux:0)})
property top_graph

Get the parent graph of this graph.

Returns

An InternalGraph.