megengine.utils package

megengine.utils.comp_graph_tools

megengine.utils.comp_graph_tools.get_dep_vars(var, var_type=None)[source]

Returns the tensor.core.megbrain_graph.VarNode objects of type var_type that the input var depends on. If var_type is None, returns VarNodes of all types.

Return type

List[VarNode]
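
Example (a minimal sketch; out_var is assumed to be a VarNode obtained from a loaded computing graph, and "Host2DeviceCopy" is used here as the opr type of graph inputs):

import megengine.utils.comp_graph_tools as cgtools

# `out_var` is an assumed VarNode from a loaded graph (not constructed here).
inputs = cgtools.get_dep_vars(out_var, "Host2DeviceCopy")
print([v.name for v in inputs])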

megengine.utils.comp_graph_tools.get_opr_type(opr)[source]

Gets the type of an opr.

Return type

str

megengine.utils.comp_graph_tools.get_oprs_seq(outputs, prune_reshape=False)[source]

Gets oprs in some topological order for a dumped model.

Parameters
  • outputs (List[VarNode]) – model outputs.

  • prune_reshape – whether to prune the useless operators during inference.

Return type

List[OperatorNode]

Returns

list of oprs in a correct execution order.
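
Example (a minimal sketch; outputs is assumed to be the list of output VarNodes of a loaded model):

import megengine.utils.comp_graph_tools as cgtools

# `outputs` is an assumed list of output VarNodes of a dumped model.
oprs = cgtools.get_oprs_seq(outputs, prune_reshape=False)
for opr in oprs:
    print(cgtools.get_opr_type(opr), opr.name)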

megengine.utils.comp_graph_tools.get_owner_opr_inputs(var)[source]

Gets the inputs of owner opr of a variable.

Return type

List[VarNode]

megengine.utils.comp_graph_tools.get_owner_opr_type(var)[source]

Gets the type of owner opr of a variable.

Return type

str

megengine.utils.comp_graph_tools.graph_traversal(outputs)[source]

Helper function that traverses the computing graph and returns useful traversal information.

Parameters

outputs (VarNode) – model outputs.

Returns

tuple (map_oprs, map_vars, var2oprs, opr2receivers, indegree2opr, opr2indegree) WHERE

  • map_oprs is a dict from opr_id to the actual opr

  • map_vars is a dict from var_id to the actual var

  • var2oprs is a dict from a var to its dest oprs along with the index

  • opr2receivers is a dict from the current opr to the next oprs

  • indegree2opr is a dict from in_degree to the oprs in the computing graph

  • opr2indegree is a dict from the oprs in the computing graph to their in_degree

(indegree2opr, opr2indegree) are only used by the topological sort in the get_oprs_seq function.
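
Example (a minimal sketch; outputs is assumed to be the output VarNodes of a loaded model):

import megengine.utils.comp_graph_tools as cgtools

# `outputs` is an assumed list of output VarNodes of a loaded model.
(map_oprs, map_vars, var2oprs,
 opr2receivers, indegree2opr, opr2indegree) = cgtools.graph_traversal(outputs)

# oprs that currently have no unresolved inputs, i.e. in-degree 0 (if any)
print(indegree2opr.get(0, []))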

megengine.utils.comp_graph_tools.load_and_inference(file, inp_data_list)[source]

Loads a serialized computing graph and runs inference with the input data.

Parameters
  • file – path or handle of the input file.

  • inp_data_list (List[ndarray]) – list of input data.

Return type

List[ndarray]

Returns

list of inference results.
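
Example (a minimal sketch; the file name and the input shape below are placeholders for your own dumped model):

import numpy as np
import megengine.utils.comp_graph_tools as cgtools

inp = np.random.random((1, 3, 224, 224)).astype("float32")
results = cgtools.load_and_inference("model.mge", [inp])
for out in results:
    print(out.shape)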

megengine.utils.comp_graph_tools.replace_oprs(dst, oprmap)[source]

Replaces operators in the graph.

Parameters
  • dst (List[VarNode]) – target vars representing the graph.

  • oprmap (Dict[OperatorNode, OperatorNode]) – the map that specifies how to replace the operators.

Return type

List[VarNode]

Returns

new vars that correspond to dst with all the dependencies replaced.

megengine.utils.comp_graph_tools.replace_vars(dst, varmap)[source]

Replaces vars in the graph.

Parameters
  • dst (VarNode) – target vars representing the graph.

  • varmap (Dict[VarNode, VarNode]) – the map that specifies how to replace the vars.

Return type

List[VarNode]

Returns

new vars that correspond to dst with all the dependencies replaced.
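
Example (a minimal sketch; out_vars, old_var and new_var are assumed to be VarNodes belonging to the same loaded graph):

import megengine.utils.comp_graph_tools as cgtools

# `out_vars` are the graph outputs; `old_var` is replaced by `new_var`
# (all three are assumed and not constructed here).
new_out_vars = cgtools.replace_vars(out_vars, {old_var: new_var})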

megengine.utils.comp_graph_tools.set_priority_to_id(dest_vars)[source]

For all oprs in the subgraph constructed by dest_vars, sets each opr's priority to its id if its original priority is zero.

Parameters

dest_vars – target vars representing the graph.

megengine.utils.compare_binary_iodump

megengine.utils.compare_binary_iodump.check(v0, v1, name, max_err)[source]
megengine.utils.compare_binary_iodump.main()[source]

megengine.utils.deprecation

megengine.utils.future

class megengine.utils.future.Future(ack=True)[source]

Bases: object

get()[source]
set(value)[source]
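
Example (a minimal sketch, assuming the usual producer/consumer rendezvous semantics where get() blocks until a value has been set):

import threading
from megengine.utils.future import Future

f = Future()
threading.Thread(target=lambda: f.set(42)).start()
print(f.get())  # blocks until the worker thread calls set(); prints 42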

megengine.utils.hook

class megengine.utils.hook.HookHandler(source_dict, hook)[source]

Bases: object

hook_num = 0
remove()[source]

megengine.utils.http_download

exception megengine.utils.http_download.HTTPDownloadError[source]

Bases: BaseException

The class that represents an HTTP request error.

megengine.utils.http_download.download_from_url(url, dst, http_read_timeout=120)[source]

Downloads a file from the given url to dst.

Parameters
  • url (str) – source URL.

  • dst (str) – saving path.

  • http_read_timeout – how many seconds to wait for data before giving up.
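
Example (a minimal sketch; the URL and destination path are placeholders):

from megengine.utils.http_download import HTTPDownloadError, download_from_url

try:
    download_from_url("https://example.com/model.mge", "/tmp/model.mge")
except HTTPDownloadError as exc:
    print("download failed:", exc)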

megengine.utils.max_recursion_limit

class megengine.utils.max_recursion_limit.AlternativeRecursionLimit(new_py_limit)[source]

Bases: object

A reentrant context manager for setting global recursion limits.

megengine.utils.max_recursion_limit.max_recursion_limit()[source]

Sets recursion limit to the max possible value.
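
Example (a minimal sketch, assuming max_recursion_limit() returns the AlternativeRecursionLimit context manager described above):

import sys
from megengine.utils.max_recursion_limit import max_recursion_limit

with max_recursion_limit():
    # recursion-heavy work here, e.g. deep-copying or pickling a large model
    print(sys.getrecursionlimit())  # raised inside the block
print(sys.getrecursionlimit())      # restored afterwards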

megengine.utils.net_stats

megengine.utils.net_stats.count_convNd(module, input, output)[source]
megengine.utils.net_stats.count_deconvNd(module, input, output)[source]
megengine.utils.net_stats.count_linear(module, input, output)[source]
megengine.utils.net_stats.net_stats(model, input_size, bar_length_max=20, log_params=True, log_flops=True)[source]
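
Example (a minimal sketch; model is assumed to be a megengine.module.Module instance and the input size is a placeholder for the model's expected input shape):

from megengine.utils.net_stats import net_stats

# `model` is an assumed megengine.module.Module (not built here).
net_stats(model, input_size=(1, 3, 224, 224), log_params=True, log_flops=True)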

megengine.utils.persistent_cache

class megengine.utils.persistent_cache.PersistentCacheOnServer[source]

Bases: megengine.core._imperative_rt.utils.PersistentCache

get(self: megengine.core._imperative_rt.utils.PersistentCache, arg0: str, arg1: Blob) → Optional[Blob][source]
classmethod make_user_prefix()[source]
put(self: megengine.core._imperative_rt.utils.PersistentCache, arg0: str, arg1: Blob, arg2: Blob) → None[source]

megengine.utils.plugin

megengine.utils.plugin.load_tensor_binary(fobj)[source]

Loads a tensor dumped by the BinaryOprIODump plugin; the actual tensor value dump is implemented by mgb::debug::dump_tensor.

Multiple values can be compared by tools/compare_binary_iodump.py.

Parameters

fobj – file object, or a string that contains the file name.

Returns

tuple (tensor_value, tensor_name).
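
Example (a minimal sketch; the file path is a placeholder for a dump produced by BinaryOprIODump):

from megengine.utils.plugin import load_tensor_binary

value, name = load_tensor_binary("io_dump/var0.bin")
print(name, value.shape, value.dtype)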

megengine.utils.profile_analyze

megengine.utils.profile_analyze.main(passed_args=None)[source]

Analyzes profile info from profile_analyzer.

Run this file with --help for usage details.

megengine.utils.profile_analyzer

class megengine.utils.profile_analyzer.NonExistNum[source]

Bases: object

An object that behaves like a number but indicates that a field does not exist; it is always greater than any real number.

class megengine.utils.profile_analyzer.OprProfRst(entry)[source]

Bases: object

Opr profiling result dumped from megengine profiler.

__init__(entry)[source]

Opr profiling initialization, which sets up name, type and id of opr_info.

Parameters

entry (dict) – profiling json exec_graph items.

footprint = None

A mapping from "memory" or "computation" to the actual number of corresponding operations.

opr_info = None

A dict containing operator info: name, id and type.

time_dict = None

A mapping from "host" or "device" to list of profiling results.

update_device_prof_info(dev_time)[source]

Updates device profiling info.

Parameters

dev_time (dict) – device time for a single opr; an attribute of the profiling result.

update_footprint(footprint)[source]

Updates opr footprint.

Parameters

footprint (dict) – footprint for a single opr; an attribute of the profiling result.

update_host_prof_info(host_time)[source]

Updates host profiling info.

Parameters

host_time (dict) – host time for a single opr; an attribute of the profiling result.

class megengine.utils.profile_analyzer.ProfileAnalyzer(obj, opr_filter=<function ProfileAnalyzer.<lambda>>)[source]

Bases: object

__init__(obj, opr_filter=<function ProfileAnalyzer.<lambda>>)[source]

Initializes ProfileAnalyzer.

Parameters
  • obj (dict) – dict dumped from json str.

  • opr_filter (Callable) – function that filters oprs.

select(time_func, opr_filter=<function ProfileAnalyzer.<lambda>>, aggregate=None, aggregate_by=None, sort_by=None, top_k=0)[source]

Selects profiling records, optionally aggregating and sorting them.

Parameters
  • time_func (Callable) – time function provided by the user; applied to every OprProfRst.

  • opr_filter (Callable) – function that filters operators.

  • aggregate (Optional[Callable]) – function applied to the list of records aggregated by the same type.

  • aggregate_by (Optional[str]) – the type to aggregate by.

  • sort_by (Optional[str]) – keyword for sorting all records.

  • top_k (int) – the maximum number of records to return.

Return type

List[Record]

Returns

the records after selection, aggregation and sorting.
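
Example (a minimal sketch; "profile.json" is a placeholder for a profiling result dumped by the MegEngine profiler, and the arguments follow the parameter descriptions above):

import json
import numpy as np
from megengine.utils.profile_analyzer import ProfileAnalyzer, TimeFuncHelper

with open("profile.json") as f:
    analyzer = ProfileAnalyzer(json.load(f))

time_func = TimeFuncHelper.eval_time_func("device", "end", np.max)
for rec in analyzer.select(time_func, top_k=5):
    print(rec.info, rec.time)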

class megengine.utils.profile_analyzer.Record(time, info, footprint)[source]

Bases: object

A record of an analysis result.

__init__(time, info, footprint)[source]

Initializes single record.

Parameters
  • time (float) – opr running time, evaluated by applying the user-provided function to OprProfRst.

  • info (dict) – opr information; either the original opr information or the aggregated information if aggregation is enabled.

  • footprint (dict) – footprint information; currently includes "computation", "memory", "in_shapes" and "out_shapes".

__slot__ = ['time', 'info', 'computation', 'memory', 'in_shapes', 'in_layouts', 'out_shapes', 'flops', 'bandwidth', 'opr_id']
get_column_by_name(name=None)[source]

Extracts column value by its column name.

Parameters

name (Optional[str]) – column name, None for time.

class megengine.utils.profile_analyzer.TimeFuncHelper[source]

Bases: object

Time Function Helper for users.

static eval_time_func(prof_type, end_key, func)[source]

Evaluates operator profile time.

Parameters
  • prof_type (str) – ‘host’ or ‘device’.

  • end_key (str) – ‘kern’ or ‘end’.

  • func (Callable) – function applied to the list of times of all threads/GPUs.

Return type

float

Returns

the evaluated time result.

static max_end_func(prof_type, end_key, func)[source]

Evaluates operator profile max end time.

Parameters
  • prof_type (str) – ‘host’ or ‘device’.

  • end_key (str) – ‘kern’ or ‘end’.

  • func (Callable) – function applied to the list of times of all threads/GPUs.

Return type

float

Returns

the evaluated time result.

static min_start_func(prof_type, end_key, func)[source]

Evaluates operator profile min start time.

Parameters
  • prof_type (str) – ‘host’ or ‘device’.

  • end_key (str) – ‘kern’ or ‘end’.

  • func (Callable) – function applied to the list of times of all threads/GPUs.

Return type

float

Returns

the evaluated time result.

megengine.utils.profiler

class megengine.utils.profiler.Profiler(path='profile', *, formats='chrome_timeline', type_filter='.*', exit_dump=True)[source]

Bases: object

Profiles graph execution in imperative mode.

Parameters

path (str) – default path prefix for profiler to dump.

Examples:

import megengine as mge
from megengine.utils.profiler import Profiler

for iter in range(0, 10):
    # Only the profile record of the last iter will be saved
    with Profiler("profile"):
        pass  # your training / inference code here

# Then open the dumped profile file in a Chrome tracing timeline window

CHROME_TIMELINE = 'chrome_timeline'
COMPATIBLE = 'compatible'
GRAPHVIZ = 'graphviz'
WITH_FOOTPRINT = 1
dump(path=None)[source]
classmethod fetch_attrs(op)[source]
megengine.utils.profiler.profile

alias of megengine.utils.profiler.Profiler

megengine.utils.tensor_sanity_check

class megengine.utils.tensor_sanity_check.TensorSanityCheck[source]

Bases: object

An object that checks whether the input tensors of each operator have changed before and after the operation.

Examples:

from megengine import tensor
from megengine.utils.tensor_sanity_check import TensorSanityCheck
with TensorSanityCheck() as checker:
    a = tensor([1, 2])
    b = tensor([3, 4])
    c = a + b

megengine.utils.types

megengine.utils.types.get_ndtuple(value, *, n, allow_zero=True)[source]

Converts a possibly 1D tuple to an n-d tuple.

Parameters

allow_zero (bool) – whether to allow zero tuple value.
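
Example (a minimal sketch of typical use, e.g. expanding a convolution kernel_size argument; the scalar-broadcast behaviour shown in the second call is an assumption based on how module constructors commonly use this helper):

from megengine.utils.types import get_ndtuple

print(get_ndtuple((2, 3), n=2))  # (2, 3)
print(get_ndtuple(3, n=2))       # expected: (3, 3), a scalar broadcast to n entries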