# Class OperatorNodeProp¶

## Inheritance Relationships¶

### Base Type¶

• public NonCopyableObj

## Class Documentation¶

class mgb::cg::OperatorNodeProp : public NonCopyableObj

properties of an operator

Most of the fields are set up by OperatorNodeBase::do_make_node_prop() and cannot be changed later, but attribute() can always be modified.

Public Types

enum Flag

Values:

enumerator SINGLE_COMP_NODE = 1 << 0

the opr works on a single comp node

enumerator CROSS_COMP_NODE_MEMORY = 1 << 1

the opr could work on different memory node than its input

enumerator IMPURE_FUNC = 1 << 2

not a pure function, meaning the output is not completely determined by the input; this also means that multiple evaluations of the same operator (without returning control to the user) may produce different results

enumerator FORCE_UPDATE_INPUT_VAR = 1 << 3

content of input var would be modified (currently only AddUpdate)

enumerator DISALLOW_COMP_NODE_OPTIMIZE = 1 << 4

do not allow comp node optimizer to change comp node of output vars of this operator

enumerator NO_AUTOMATIC_DUP = 1 << 5

the operator should not be automatically duplicated (i.e. it may have side effects, even if it is a pure function); automatic duplication can be used in the sublinear memory optimizer

enumerator IMPURE_OUTPUT_MEM_PLAN = 1 << 6

this operator has a custom implementation of init_output_mem_plan(), and its output mem plan may change even if no shape changes. init_output_mem_plan() for such oprs would always be called before each graph execution.

enumerator NO_INPUT_WAITING = 1 << 7

Do not automatically add waiting spec for inputs on output comp nodes. This is useful for utility operators that directly dispatch funcs onto input comp nodes; their outputs are usually a placeholder variable.

Note: the input_waiting_spec() would not be initialized, and the output should not be read by oprs on other comp nodes.

enum DepType

type of dependency of one operator on another operator

Values:

enumerator DEV_VALUE = 1 << 0

device value must be computed before starting the opr; this is the default dep type for input vars

enumerator HOST_VALUE = 1 << 1

depends on host value, which must be retrieved from StaticInferManager at runtime; if the value can be statically inferred and DEV_COMP_ORDER is not set, it may not be computed on device. Note that a change of host value would not cause memory reallocation, so oprs whose memory depends on the host value while the output shape may stay unchanged should also add HOST_VALUE_DYNOUT.

enumerator HOST_VALUE_DYNOUT = 1 << 2

add RT_FORCE_DYNAMIC_MEM_ALLOC flag to output if input in this dependency entry is not const-inferable. HOST_VALUE must also be set.

This is used when the output value can be forwarded from one input (e.g. value in the IndexAt opr) while other inputs (e.g. index in IndexAt) change frequently. Also note that static memory allocation would not be triggered when no shape changes, so oprs like IndexAt must use dynamic allocation to ensure that their output value corresponds to the current index value if the index can change.

enumerator SHAPE = 1 << 3

depends on shape, which can be accessed via VarNode::shape at runtime; if the shape can be statically inferred and DEV_COMP_ORDER is not set, computing the value on device may be omitted

enumerator DEV_COMP_ORDER = 1 << 4

only needs to ensure that the dependency has been computed; note that the value is not needed, so its memory can be reclaimed, but the shape is always valid

enumerator VALUE_ALLOW_EMPTY = 1 << 5

whether empty tensor is allowed for HOST_VALUE or DEV_VALUE dep types; either HOST_VALUE or DEV_VALUE must also be specified

using DepMap = ThinHashMap<VarNode*, DepType>

Public Functions

const DepMap &dep_map() const

get all dependencies needed to produce the output

DepMap &dep_map()
OperatorNodeProp &add_flag(Flag flag)

bool contain(Flag req) const

test whether a flag has been added

OperatorNodeProp &add_dep_type(VarNode *dest, DepType type)

add a dependency type to a var; original dependency types are retained; dest is allowed to be absent from the current dep map

OperatorNodeProp &add_dep_type_existing_var(VarNode *dest, DepType type)

like add_dep_type() but requires dest to already exist in dep map

void reset_dep_type(const VarNodeArray &vars, const SmallVector<DepType> &dep_types)

reset the dep map; vars may contain duplicated var nodes, in which case the corresponding dep types are ORed together

Attribute &attribute() const

user-modifiable attribute

Public Static Functions

constexpr bool is_device_comp_order_dep(DepType type)

whether a dep type requires device computation order

constexpr bool is_device_value_dep(DepType type)

whether a dep type requires the value on device

struct Attribute

operator attributes that can be directly modified

Public Members

int priority = 0

topological-sort priority: a smaller number means higher priority

Accessory accessory
Maybe<GradTracker> grad_tracker
OperatorNodeBase *src_opr = nullptr

if this operator is copied from another opr or generated by graph transformation from another opr, then src_opr would be the corresponding source operator

class Accessory

objects associated with this opr; their memory should be managed by some UserData class attached to the computing graph

struct GradTracker

source operator that creates this opr as its gradient

Public Members

OperatorNodeBase *orig_opr
VarNode *target_var
VarNode *wrt_var