Class GraphLoader

Nested Relationships

Class Documentation

class mgb::serialization::GraphLoader

load graph from a megbrain dump file

Each GraphLoader instance can create multiple graphs, but all the created graphs share the underlying params (i.e. the values of SharedDeviceTensor operators are shared).

Public Types

using LoadConfig = GraphLoadConfig
using SharedTensorMapEntry = ThinHashMap<MemNode, std::shared_ptr<DeviceTensorND>>

mem_node => tensor_value

using SharedTensorIDMap = std::vector<std::pair<std::string, SharedTensorMapEntry>>

tensor_id => (tensor_name, (mem_node => tensor_value))

Since tensor IDs are guaranteed to be consecutive, a vector is used to implement the map.

Either all tensor names are empty, or they are guaranteed to be distinct non-empty strings at dump time.

using SharedTensorNameMap = std::unordered_map<std::string, const SharedTensorMapEntry*>

tensor_name => SharedTensorMapEntry

Public Functions

~GraphLoader() = default
std::unique_ptr<InputFile> reset_file(std::unique_ptr<InputFile> file = {}) = 0

reset the underlying input file from which subsequent load() calls read

This method can be used to release the currently owned file to the caller.


Return
  the original input file that was previously in use

Parameters
  • file: new input file; can be null

LoadResult load(const LoadConfig &config = {}, bool rewind = true) = 0

create a new graph instance; not thread-safe


const SharedTensorIDMap &shared_tensor_id_map() const = 0

get mapping from tensor ID to device tensor shared between instances

The shared tensors are usually used as model params in a machine-learning context. For each param, the returned value maps each memory node to the first param loaded on that mem node.

SharedTensorNameMap shared_tensor_name_map()

helper for constructing SharedTensorNameMap from SharedTensorIDMap

GraphDumpFormat format() const = 0

Public Static Functions

std::unique_ptr<GraphLoader> make(std::unique_ptr<InputFile> file, GraphDumpFormat format = {})
Maybe<GraphDumpFormat> identify_graph_dump_format(InputFile &file)
struct LoadResult

Public Types

using TensorMap = std::unordered_map<std::string, std::shared_ptr<HostTensorND>>

Public Functions

~LoadResult() noexcept

explicit dtor declaration to reduce binary size

std::unique_ptr<cg::AsyncExecutable> graph_compile(const ComputingGraph::OutputSpec &outspec)

call graph->compile(), but also check for comp node sequence recording

The graph would be destructed if comp_node_seq_record_level == 2, so this method should be called in favor of graph->compile().

Public Members

std::shared_ptr<ComputingGraph> graph
TensorMap tensor_map

name to host tensor used in this graph, usually for input tensors

std::unordered_map<std::string, SymbolVar> output_var_map

name to output var nodes specified during serialization

std::unordered_map<size_t, SymbolVar> output_var_map_id

map from original id to loaded output vars

SymbolVarArray output_var_list

original output vars in the order passed to GraphDumper::dump