megengine.module¶
>>> import megengine.module as M
See also
For examples of how to use Module to define a model, see Use Module to define the model structure;
for how to quantize a model and how the several kinds of Module are converted, see Quantization.
Float Module¶
Containers¶
Module | Base Module class.
Sequential | A sequential container.
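As a minimal sketch (the layer sizes and the input shape below are arbitrary examples), a model is defined either by subclassing Module and implementing forward, or by stacking existing modules in a Sequential:

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.functional as F
>>> import megengine.module as M
>>> class ConvNet(M.Module):              # subclass the base Module class
...     def __init__(self):
...         super().__init__()
...         self.conv = M.Conv2d(3, 8, 3, padding=1)
...         self.fc = M.Linear(8 * 32 * 32, 10)
...     def forward(self, x):
...         x = F.relu(self.conv(x))
...         return self.fc(F.flatten(x, 1))
>>> net = ConvNet()
>>> mlp = M.Sequential(M.Linear(16, 32), M.ReLU(), M.Linear(32, 4))  # the same idea as a Sequential
>>> x = mge.tensor(np.random.randn(2, 3, 32, 32).astype("float32"))
>>> y = net(x)                            # y.shape == (2, 10)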
General operations¶
Elemwise | A Module that applies an element-wise operator to its inputs.
Convolution Layers¶
Conv1d | Applies a 1D convolution over an input tensor.
Conv2d | Applies a 2D convolution over an input tensor.
Conv3d | Applies a 3D convolution over an input tensor.
ConvTranspose2d | Applies a 2D transposed convolution over an input tensor.
ConvTranspose3d | Applies a 3D transposed convolution over an input tensor.
LocalConv2d | Applies a spatial convolution with untied kernels over a grouped, channeled 4D input tensor.
DeformableConv2d | Deformable Convolution.
SlidingWindow | Applies a sliding window to the input tensor and copies the content in each window to the corresponding output location.
SlidingWindowTranspose | Opposite operation of SlidingWindow: sums over the sliding windows at the corresponding input locations.
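A hedged sketch of the most common convolution modules (channel counts, kernel sizes and the input shape are arbitrary examples):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(1, 3, 32, 32).astype("float32"))
>>> conv = M.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)
>>> y = conv(x)                           # padding=1 keeps the spatial size: (1, 16, 32, 32)
>>> deconv = M.ConvTranspose2d(16, 3, kernel_size=2, stride=2)
>>> z = deconv(y)                         # upsamples to (1, 3, 64, 64)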
Pooling layers¶
AvgPool2d | Applies a 2D average pooling over an input.
MaxPool2d | Applies a 2D max pooling over an input.
AdaptiveAvgPool2d | Applies a 2D adaptive average pooling over an input.
AdaptiveMaxPool2d | Applies a 2D adaptive max pooling over an input.
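For example (the output sizes in the comments assume the shapes used here):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(1, 8, 32, 32).astype("float32"))
>>> pool = M.MaxPool2d(kernel_size=2, stride=2)
>>> y = pool(x)                           # halves the spatial size: (1, 8, 16, 16)
>>> gap = M.AdaptiveAvgPool2d((1, 1))     # target output size; here global average pooling
>>> g = gap(x)                            # (1, 8, 1, 1)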
Padding layers¶
Pad | Pads the input tensor.
Non-linear Activations¶
Sigmoid | Applies the element-wise sigmoid function.
Softmax | Applies a softmax function.
ReLU | Applies the rectified linear unit function element-wise.
LeakyReLU | Applies the element-wise leaky ReLU function.
PReLU | Applies the element-wise parametric ReLU function.
SiLU | Applies the element-wise SiLU (swish) function.
GELU | Applies the element-wise GELU function.
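These activations are used as ordinary modules, either called directly or placed in a container; a small sketch (the values and sizes are arbitrary examples):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.array([-1.0, 0.0, 2.0], dtype="float32"))
>>> y = M.ReLU()(x)                       # negatives clamped to zero: [0., 0., 2.]
>>> s = M.Sigmoid()(x)                    # each element squashed into (0, 1)
>>> mlp = M.Sequential(M.Linear(4, 8), M.GELU(), M.Linear(8, 2))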
Normalization Layers¶
BatchNorm1d | Applies Batch Normalization over a 2D or 3D input.
BatchNorm2d | Applies Batch Normalization over a 4D tensor.
SyncBatchNorm | Applies Synchronized Batch Normalization for distributed training.
GroupNorm | Applies Group Normalization over a mini-batch of inputs; refer to Group Normalization.
InstanceNorm | Applies Instance Normalization over a mini-batch of inputs; refer to Instance Normalization.
LayerNorm | Applies Layer Normalization over a mini-batch of inputs; refer to Layer Normalization.
LocalResponseNorm | Applies local response normalization to the input tensor.
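A hedged sketch of the most common normalization modules (the channel count and group number are arbitrary examples, and GroupNorm is assumed here to take num_groups and num_channels):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(4, 16, 8, 8).astype("float32"))
>>> bn = M.BatchNorm2d(16)                # num_features matches the channel dimension
>>> y = bn(x)                             # same shape as the input
>>> gn = M.GroupNorm(num_groups=4, num_channels=16)
>>> z = gn(x)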
Recurrent Layers¶
RNN | Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence.
RNNCell | An Elman RNN cell with tanh or ReLU non-linearity.
LSTM | Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
LSTMCell | A long short-term memory (LSTM) cell.
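A sketch of running an LSTM over a sequence, assuming the (seq_len, batch, feature) input layout and the output, (h_n, c_n) return convention; all sizes are arbitrary examples:

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> seq = mge.tensor(np.random.randn(5, 2, 10).astype("float32"))   # (seq_len, batch, input_size)
>>> lstm = M.LSTM(input_size=10, hidden_size=20, num_layers=1)
>>> output, (h_n, c_n) = lstm(seq)        # output: (5, 2, 20); h_n and c_n: (1, 2, 20)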
Linear Layers¶
Identity | A placeholder identity operator that will ignore any argument.
Linear | Applies a linear transformation to the input.
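For example (the feature sizes are arbitrary):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(4, 128).astype("float32"))
>>> fc = M.Linear(128, 10)                # in_features -> out_features
>>> y = fc(x)                             # shape (4, 10)
>>> head = M.Identity()                   # placeholder returning its input unchanged
>>> z = head(x)                           # shape (4, 128)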
Dropout Layers¶
Dropout | Randomly sets some elements of the input to zero with probability \(drop\_prob\) during training.
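Dropout is only active in training mode; a minimal sketch (the drop probability 0.5 is an arbitrary example):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> drop = M.Dropout(0.5)
>>> x = mge.tensor(np.ones((2, 8), dtype="float32"))
>>> drop.train()                          # training mode: elements are zeroed at random
>>> y_train = drop(x)
>>> drop.eval()                           # evaluation mode: acts as an identity
>>> y_eval = drop(x)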
Sparse Layers¶
Embedding | A simple lookup table that stores embeddings of a fixed dictionary and size.
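For example, looking up embedding vectors for integer token indices (the table size and the indices are arbitrary examples):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> emb = M.Embedding(num_embeddings=1000, embedding_dim=64)   # dictionary size x vector size
>>> ids = mge.tensor(np.array([[3, 17, 256]], dtype="int32"))  # indices into the table
>>> vecs = emb(ids)                       # shape (1, 3, 64)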
Vision Layers¶
PixelShuffle | Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\), where \(r\) is an upscale factor and \(*\) is zero or more batch dimensions.
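A sketch with an upscale factor of 2 (the shapes are arbitrary examples):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(1, 16, 8, 8).astype("float32"))   # C = 4 * r**2 with r = 2
>>> shuffle = M.PixelShuffle(2)
>>> y = shuffle(x)                        # shape (1, 4, 16, 16)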
Fused operations¶
ConvBn2d | A fused Module including Conv2d and BatchNorm2d.
ConvBnRelu2d | A fused Module including Conv2d, BatchNorm2d and ReLU.
BatchMatMulActivation | Batched MatMul with activation (only ReLU supported), no transpose anywhere.
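A hedged sketch of a fused block, assuming ConvBn2d accepts the same leading arguments as Conv2d (channel counts and kernel size are arbitrary examples):

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> x = mge.tensor(np.random.randn(1, 3, 32, 32).astype("float32"))
>>> block = M.ConvBn2d(3, 16, 3, padding=1)   # convolution and batch norm in a single Module
>>> y = block(x)                          # shape (1, 16, 32, 32)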
Quantization¶
QuantStub | A helper Module that simply returns the input; its quantized counterpart quantizes the input after conversion.
DequantStub | A helper Module that simply returns the input; its quantized counterpart de-quantizes the input after conversion.
QAT Module¶
Containers¶
QATModule | Base class of quantized-float related Module, used for QAT (quantization-aware training) and calibration.
Operations¶
QAT versions of the corresponding float modules, including the fused convolution modules and the QuantStub / DequantStub helpers, with fake quantization applied during training.
Quantized Module¶
QuantizedModule | Base class of quantized Module, which should be converted from the corresponding QATModule and does not support training.
Operations¶
Quantized versions of the corresponding float/QAT modules, converted from their QATModule counterparts.
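A hedged sketch of how the three kinds of Module relate, using the conversion helpers from megengine.quantization (the model and the choice of ema_fakequant_qconfig are arbitrary examples); see the Quantization guide referenced above for the full workflow:

>>> import megengine.module as M
>>> import megengine.quantization as Q
>>> net = M.Sequential(
...     M.QuantStub(),                    # marks where inputs are quantized after conversion
...     M.ConvBnRelu2d(3, 16, 3, padding=1),
...     M.DequantStub(),                  # marks where outputs are de-quantized
... )
>>> qat_net = Q.quantize_qat(net, qconfig=Q.ema_fakequant_qconfig)   # float Module -> QAT Module
>>> # ... QAT fine-tuning / calibration would run here ...
>>> quantized_net = Q.quantize(qat_net)                              # QAT Module -> Quantized Module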
External Layers¶
Load a serialized ExternOpr subgraph.
Load a serialized TensorrtRuntime subgraph.
Load a serialized CambriconRuntime subgraph.
Load a serialized AtlasRuntime subgraph.
Load a serialized MagicMindRuntime subgraph.