Defined in File operator_node.h
properties of an operator
Most of the fields are set up by OperatorNodeBase::do_make_node_prop() and cannot be changed later, but attribute() can always be modified.
the opr works on a single comp node
the opr could work on different memory node than its input
not a pure function, meaning the output is not completely determined by the input; this also means that multiple evaluations of the same operator (without returning control to the user) may produce different results
content of input var would be modified (currently only AddUpdate)
do not allow comp node optimizer to change comp node of output vars of this operator
the operator should not be automatically duplicated (i.e. it may have side effects even if it is a pure function); automatic duplication can be used in the sublinear memory optimizer
this operator has a custom implementation of init_output_mem_plan(), and its mem plan may change even if no shape changes; init_output_mem_plan() for such oprs is always called before each graph execution
Do not automatically add waiting spec for inputs on output comp nodes. This is useful for utility operators that directly dispatch funcs onto input comp nodes; their outputs are usually a placeholder variable.
Note: input_waiting_spec() would not be initialized, and the output should not be read by oprs on other comp nodes.
type of dependency of one operator on another operator
device value must be computed before starting opr; this is the default dep type for input vars
depends on the host value, which must be retrieved from StaticInferManager at runtime; if the value can be statically inferred and DEV_COMP_ORDER is not set, it may not be computed on the device. Note that a change of host value does not cause memory reallocation, so oprs whose memory depends on the host value but whose output shape may stay unchanged should also add HOST_VALUE_DYNOUT
add the RT_FORCE_DYNAMIC_MEM_ALLOC flag to the output if the input in this dependency entry is not const-inferable; HOST_VALUE must also be set.
This is used when the output value can be forwarded from one input (e.g. value in the IndexAt opr) while other inputs (e.g. index in IndexAt) change frequently. Also note that static memory allocation is not re-triggered when no shape changes, so oprs like IndexAt must use dynamic allocation to ensure the output value corresponds to the current index value if the index can change.
depends on shape, which can be accessed by VarNode::shape during runtime; if shape could be statically inferred and DEV_COMP_ORDER is not set, computing on device may be omitted
only requires the dependency to have been computed; note that since the value is not needed, its memory can be reclaimed, but the shape is always valid
whether an empty tensor is allowed for the HOST_VALUE or DEV_VALUE dep types; either HOST_VALUE or DEV_VALUE must also be specified
get all dependency needed to produce output
add a flag
test whether a flag has been added
add a dependency type to a var; original dependency types are retained; dest need not already exist in the current dep map
like add_dep_type() but requires dest to already exist in dep map
reset the dep types; the vars may contain duplicated var nodes, in which case the corresponding dep types would be ORed together
Public Static Functions
whether a dep type requires device computation order
whether a dep type requires values on device
operator attributes that can be directly modified
topo sort priority: smaller number means higher priority
if this operator is copied from another opr or generated by graph transformation from another opr, then src_opr would be the corresponding source operator
objects associated with this opr; their memory should be managed by some UserData class attached to the computing graph
source operator that creates this opr as its gradient