MegEngine 1.13.2 documentation - Home

Site Navigation

  • Beginner’s guide
  • User’s Guide
  • Developer's Guide
  • Forum
  • Official Website
  • GitHub
  • Bilibili

Section Navigation

  • How to install MegEngine
  • User Migration Guide
    • MegEngine for NumPy users
    • MegEngine for PyTorch users
  • Summary of common problems
    • Frequently Asked Questions about Video Memory Usage
    • Frequently Asked Questions about Model Reproduction

Model Development (Basics)

  • Deep understanding of Tensor data structure
    • Rank, Axes and Shape attributes
    • Tensor element index
    • Tensor data type
    • The device where the Tensor is located
    • Examples of Tensor visualization
    • Tensor memory layout
  • Use Functional operations and calculations
    • How to create a Tensor
    • How to operate Tensor
    • How to perform scientific calculations with Tensor
  • Use Data to build the input pipeline
    • Use Dataset to define a data set
    • Use Sampler to define sampling rules
    • Use Transform to define data transformation
    • Use Collator to define a merge strategy
  • Use Module to define the model structure
    • Module base class concept and interface introduction
  • Basic principles and use of Autodiff
    • Advanced use of Autodiff
  • Use Optimizer to optimize parameters
    • Parameter optimization advanced configuration
  • Save and Load Models (S&L)
  • Use Hub to publish and load pre-trained models

Model Development (Advanced)

  • Saving memory by recomputing (Recomputation)
    • Memory optimization using DTR
    • Memory optimization using Sublinear
  • Distributed Training
  • Quantization
    • Explanation of the quantization scheme principles
  • Automatic mixed precision (AMP)
    • Further speedup with the NHWC format
  • Model performance data generation and analysis (Profiler)
  • Release models with TracedModule
    • Getting started with TracedModule
    • TracedModule basic concepts
    • TracedModule API introduction
    • Common graph surgeries with TracedModule
  • Just-in-time compilation (JIT)
    • Convert dynamic graphs to static graphs (Trace)
    • Export serialized model file (Dump)
    • Accelerate model training with XLA as the compilation backend

Inference and deployment

  • Model Deployment Overview and Process Recommendations
  • Deploy the model with MegEngine Lite
    • Get a model for MegEngine Lite inference
    • Quick start: deploying models with MegEngine Lite C++
    • Quick start: deploying models with MegEngine Lite Python
  • MegEngine Lite APIs
    • MegEngine Lite C++ API introduction
    • MegEngine Lite Python API introduction
  • Advanced model deployment with MegEngine Lite
    • Performance optimization
      • Input/output memory copy optimization
      • Execution performance optimization
    • Memory optimization
    • Reducing the size of MegEngine Lite
    • Model encryption/decryption and packaging (optional)
    • CV algorithm examples
  • Test and validate models with Load and run
    • Load and run quick start
    • Load and run option list and descriptions
    • Inference optimization experiments with Load and run
    • Model performance analysis with Load and run
    • Accuracy analysis with Load and run
    • Debugging model inference with Load and run
    • Load and run Python API
    • Automatically obtain the optimal inference acceleration configuration with Load and run

Tools and plug-ins

  • Statistics and visualization of parameters and calculations
  • MegEngine model visualization
  • RuntimeOpr instructions
  • Custom Op

Summary of common problems#

  • Frequently Asked Questions about Video Memory Usage
  • Frequently Asked Questions about Model Reproduction

previous

MegEngine for PyTorch users

next

Frequently Asked Questions about Video Memory Usage
