megengine.functional package

megengine.functional.debug_param

megengine.functional.debug_param.get_conv_execution_strategy()[source]

Returns the execution strategy of Conv2d.

See set_conv_execution_strategy() for possible return values.

Return type

str

megengine.functional.debug_param.set_conv_execution_strategy(option)[source]

Sets the execution strategy of Conv2d.

Parameters

option (str) –

Decides how Conv2d algorithm is chosen. Available values:

  • 'HEURISTIC' uses a heuristic to choose the fastest algorithm.

  • 'PROFILE' runs possible algorithms on the real device to find the best one.

  • 'PROFILE_HEURISTIC' uses profiling results and a heuristic to choose the fastest algorithm.

  • 'PROFILE_REPRODUCIBLE' uses the fastest reproducible algorithm found by profiling.

  • 'HEURISTIC_REPRODUCIBLE' uses a heuristic to choose the fastest algorithm that is also reproducible.

The default strategy is 'HEURISTIC'.

It can also be set through the environment variable 'MEGENGINE_CONV_EXECUTION_STRATEGY'.
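As a sketch, the environment-variable route can be exercised without the Python API. The variable name is the one documented above; exactly when MegEngine reads it is an implementation detail, so setting it before importing megengine is the safe order:

```python
import os

# Set the documented environment variable before megengine is imported;
# when MegEngine reads it is an implementation detail, so set it first.
os.environ["MEGENGINE_CONV_EXECUTION_STRATEGY"] = "PROFILE"

# The same choice can be made at runtime through the API:
# import megengine.functional.debug_param as debug_param
# debug_param.set_conv_execution_strategy("PROFILE")
print(os.environ["MEGENGINE_CONV_EXECUTION_STRATEGY"])
```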

megengine.functional.distributed

megengine.functional.elemwise

megengine.functional.elemwise.abs(x)[source]

Element-wise absolute value.

megengine.functional.elemwise.acos(x)[source]

Element-wise inverse cosine.

megengine.functional.elemwise.acosh(x)[source]

Element-wise inverse hyperbolic cosine.

megengine.functional.elemwise.add(x, y)[source]

Element-wise addition. At least one operand should be a tensor.

The same applies to sub/mul/div/floor_div/pow/mod/atan2/equal/not_equal/less/less_equal/greater/greater_equal/maximum/minimum.

Parameters
  • x – input tensor.

  • y – input tensor.

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
y = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
out = F.add(x, y)
print(out.numpy())

Outputs:

[[ 0.  2.  4.]
 [ 6.  8. 10.]]
megengine.functional.elemwise.asin(x)[source]

Element-wise inverse sine.

megengine.functional.elemwise.asinh(x)[source]

Element-wise inverse hyperbolic sine.

megengine.functional.elemwise.atan(x)[source]

Element-wise inverse tangent.

megengine.functional.elemwise.atan2(y, x)[source]

Element-wise 2-argument arctangent.

megengine.functional.elemwise.atanh(x)[source]

Element-wise inverse hyperbolic tangent.

megengine.functional.elemwise.ceil(x)[source]

Element-wise ceiling.

megengine.functional.elemwise.clip(x, lower=None, upper=None)[source]

Clamps all elements in input tensor into the range [lower, upper] and returns a resulting tensor:

\[\begin{split}y_i = \begin{cases} \text{lower} & \text{if } x_i < \text{lower} \\ x_i & \text{if } \text{lower} \leq x_i \leq \text{upper} \\ \text{upper} & \text{if } x_i > \text{upper} \end{cases}\end{split}\]
Parameters
  • x (Tensor) – input tensor.

  • lower – lower-bound of the range to be clamped to.

  • upper – upper-bound of the range to be clamped to.

Return type

Tensor

Returns

output clamped tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

a = tensor(np.arange(5).astype(np.int32))
print(F.clip(a, 2, 4).numpy())
print(F.clip(a, lower=3).numpy())
print(F.clip(a, upper=3).numpy())

Outputs:

[2 2 2 3 4]
[3 3 3 3 4]
[0 1 2 3 3]
megengine.functional.elemwise.cos(x)[source]

Element-wise cosine.

Parameters

x – input tensor.

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
out = F.cos(x)
print(out.numpy().round(decimals=4))

Outputs:

[[ 1.      0.5403 -0.4161]
 [-0.99   -0.6536  0.2837]]
megengine.functional.elemwise.cosh(x)[source]

Element-wise hyperbolic cosine.

megengine.functional.elemwise.div(x, y)[source]

Element-wise (x / y).

megengine.functional.elemwise.equal(x, y)[source]

Element-wise (x == y).

Parameters
  • x – input tensor 1.

  • y – input tensor 2.

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
y = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
out = F.equal(x, y)
print(out.numpy())

Outputs:

[[1. 1. 1.]
 [1. 1. 1.]]
megengine.functional.elemwise.exp(x)[source]

Element-wise exponential.

megengine.functional.elemwise.expm1(x)[source]

Element-wise exp(x)-1.

megengine.functional.elemwise.floor(x)[source]

Element-wise floor.

megengine.functional.elemwise.floor_div(x, y)[source]

Element-wise floor(x / y).

megengine.functional.elemwise.greater(x, y)[source]

Element-wise (x > y).

megengine.functional.elemwise.greater_equal(x, y)[source]

Element-wise (x >= y).

megengine.functional.elemwise.hsigmoid(x)[source]

Element-wise relu6(x + 3) / 6.

megengine.functional.elemwise.hswish(x)[source]

Element-wise x * relu6(x + 3) / 6.

Parameters

x – input tensor.

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(5).astype(np.float32))
out = F.hswish(x)
print(out.numpy().round(decimals=4))

Outputs:

[0.     0.6667 1.6667 3.     4.    ]
megengine.functional.elemwise.left_shift(x, y)[source]

Element-wise bitwise binary: x << y.

Parameters
  • x – input tensor, should be int.

  • y – how many bits to be left-shifted.

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.int32).reshape(2, 3))
out = F.left_shift(x, 2)
print(out.numpy())

Outputs:

[[ 0  4  8]
 [12 16 20]]
megengine.functional.elemwise.less(x, y)[source]

Element-wise (x < y).

megengine.functional.elemwise.less_equal(x, y)[source]

Element-wise (x <= y).

megengine.functional.elemwise.log(x)[source]

Element-wise logarithm (base e).

megengine.functional.elemwise.log1p(x)[source]

Element-wise log(x+1) (base e).

megengine.functional.elemwise.logical_and(x, y)[source]

Element-wise logical and: x && y.

megengine.functional.elemwise.logical_not(x)[source]

Element-wise logical not: ~x.

megengine.functional.elemwise.logical_or(x, y)[source]

Element-wise logical or: x || y.

megengine.functional.elemwise.logical_xor(x, y)[source]

Element-wise logical xor: x ^ y.

megengine.functional.elemwise.maximum(x, y)[source]

Element-wise maximum of array elements.

megengine.functional.elemwise.minimum(x, y)[source]

Element-wise minimum of array elements.

megengine.functional.elemwise.mod(x, y)[source]

Element-wise remainder of division.

megengine.functional.elemwise.mul(x, y)[source]

Element-wise multiplication.

megengine.functional.elemwise.neg(x)[source]

Element-wise negation.

megengine.functional.elemwise.not_equal(x, y)[source]

Element-wise (x != y).

megengine.functional.elemwise.pow(x, y)[source]

Element-wise power.

megengine.functional.elemwise.relu(x)[source]

Element-wise max(x, 0).

megengine.functional.elemwise.relu6(x)[source]

Element-wise min(max(x, 0), 6).

megengine.functional.elemwise.right_shift(x, y)[source]

Element-wise bitwise binary: x >> y.

megengine.functional.elemwise.round(x)[source]

Element-wise rounding to int.

megengine.functional.elemwise.sigmoid(x)[source]

Element-wise 1 / (1 + exp(-x)).

megengine.functional.elemwise.sin(x)[source]

Element-wise sine.

megengine.functional.elemwise.sinh(x)[source]

Element-wise hyperbolic sine.

megengine.functional.elemwise.sqrt(x)[source]

Element-wise sqrt. Returns NaN for negative input value.

Parameters

x (Tensor) – input tensor.

Return type

Tensor

Returns

computed tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
out = F.sqrt(x)
print(out.numpy().round(decimals=4))

Outputs:

[[0.     1.     1.4142]
 [1.7321 2.     2.2361]]
megengine.functional.elemwise.square(x)[source]

Returns a new tensor with the square of the elements of input tensor.

Parameters

inp – input tensor.

Return type

Tensor

Returns

computed tensor.

Examples:

import numpy as np
import megengine as mge
import megengine.functional as F

data = mge.tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
out = F.square(data)
print(out.numpy().round(decimals=4))

Outputs:

[[ 0.  1.  4.]
 [ 9. 16. 25.]]
megengine.functional.elemwise.sub(x, y)[source]

Element-wise subtraction.

megengine.functional.elemwise.tan(x)[source]

Element-wise tangent.

megengine.functional.elemwise.tanh(x)[source]

Element-wise hyperbolic tangent.

megengine.functional.inplace

megengine.functional.inplace.apply()

megengine.functional.loss

megengine.functional.loss.binary_cross_entropy(pred, label, with_logits=True)[source]

Computes the binary cross entropy loss (using logits by default).

By default (with_logits is True), pred is assumed to be logits; class probabilities are given by sigmoid.

Parameters
  • pred (Tensor) – (N, *), where * means any number of additional dimensions.

  • label (Tensor) – (N, *), same shape as the input.

  • with_logits (bool) – bool, whether to apply sigmoid first. Default: True

Return type

Tensor

Returns

loss value.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

pred = tensor(np.array([0, 0], dtype=np.float32).reshape(1, 2))
label = tensor(np.ones((1, 2), dtype=np.float32))
loss = F.nn.binary_cross_entropy(pred, label)
print(loss.numpy().round(decimals=4))

Outputs:

0.6931
megengine.functional.loss.cross_entropy(pred, label, axis=1, with_logits=True, label_smooth=0)[source]

Computes the multi-class cross entropy loss (using logits by default).

By default (with_logits is True), pred is assumed to be logits; class probabilities are given by softmax.

It has better numerical stability compared with sequential calls to softmax() and cross_entropy().

When using label smoothing, the label distribution is as follows:

\[y^{LS}_{k}=y_{k}\left(1-\alpha\right)+\alpha/K\]

where \(y^{LS}\) and \(y\) are the new and original label distributions respectively, \(k\) is the index of the label distribution, \(\alpha\) is label_smooth and \(K\) is the number of classes.

Parameters
  • pred (Tensor) – input tensor representing the predicted probability.

  • label (Tensor) – input tensor representing the classification label.

  • axis (int) – an axis along which softmax will be applied. Default: 1

  • with_logits (bool) – whether to apply softmax first. Default: True

  • label_smooth (float) – label smoothing parameter that re-distributes the target distribution. Default: 0

Return type

Tensor

Returns

loss value.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data_shape = (1, 2)
label_shape = (1, )
pred = tensor(np.array([0, 0], dtype=np.float32).reshape(data_shape))
label = tensor(np.ones(label_shape, dtype=np.int32))
loss = F.nn.cross_entropy(pred, label)
print(loss.numpy().round(decimals=4))

Outputs:

0.6931
megengine.functional.loss.hinge_loss(pred, label, norm='L1')[source]

Calculates the hinge loss, which is often used in SVMs.

The hinge loss can be described as:

\[loss(x, y) = \frac{1}{N}\sum_i\sum_j(max(0, 1 - x_{ij}*y_{ij}))\]
Parameters
  • pred (Tensor) – input tensor representing the predicted probability, shape is (N, C).

  • label (Tensor) – input tensor representing the binary classification label, shape is (N, C).

  • norm (str) – specifies the norm used to calculate the loss; should be "L1" or "L2".

Return type

Tensor

Returns

loss value.

Examples:

from megengine import tensor
import megengine.functional as F

pred = tensor([[0.5, -0.5, 0.1], [-0.6, 0.7, 0.8]], dtype="float32")
label = tensor([[1, -1, -1], [-1, 1, 1]], dtype="float32")
loss = F.nn.hinge_loss(pred, label)
print(loss.numpy())

Outputs:

1.5
megengine.functional.loss.l1_loss(pred, label)[source]

Calculates the mean absolute error (MAE) between each element in the pred \(x\) and label \(y\).

The mean absolute error can be described as:

\[\ell(x,y) = mean\left(L \right)\]

where

\[L = \{l_1,\dots,l_N\}, \quad l_n = \left| x_n - y_n \right|,\]

\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(N\) elements each. \(N\) is the batch size.

Parameters
  • pred (Tensor) – predicted result from model.

  • label (Tensor) – ground truth to compare.

Return type

Tensor

Returns

loss value.

Examples:

import numpy as np
import megengine as mge
import megengine.functional as F

ipt = mge.tensor(np.array([3, 3, 3, 3]).astype(np.float32))
tgt = mge.tensor(np.array([2, 8, 6, 1]).astype(np.float32))
loss = F.nn.l1_loss(ipt, tgt)
print(loss.numpy())

Outputs:

2.75
megengine.functional.loss.square_loss(pred, label)[source]

Calculates the mean squared error (squared L2 norm) between each element in the pred \(x\) and label \(y\).

The mean squared error can be described as:

\[\ell(x, y) = mean\left( L \right)\]

where

\[L = \{l_1,\dots,l_N\}, \quad l_n = \left( x_n - y_n \right)^2,\]

\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(N\) elements each. \(N\) is the batch size.

Parameters
  • pred (Tensor) – predicted result from model.

  • label (Tensor) – ground truth to compare.

Return type

Tensor

Returns

loss value.

Shape:
  • pred: \((N, *)\) where \(*\) means any number of additional dimensions.

  • label: \((N, *)\). Same shape as pred.

Examples:

import numpy as np
import megengine as mge
import megengine.functional as F

ipt = mge.tensor(np.array([3, 3, 3, 3]).astype(np.float32))
tgt = mge.tensor(np.array([2, 8, 6, 1]).astype(np.float32))
loss = F.nn.square_loss(ipt, tgt)
print(loss.numpy())

Outputs:

9.75

megengine.functional.math

megengine.functional.math.argmax(inp, axis=None, keepdims=False)[source]

Returns the indices of the maximum values along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2,3))
out = F.argmax(x)
print(out.numpy())

Outputs:

5
megengine.functional.math.argmin(inp, axis=None, keepdims=False)[source]

Returns the indices of the minimum values along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2,3))
out = F.argmin(x)
print(out.numpy())

Outputs:

0
megengine.functional.math.argsort(inp, descending=False)[source]

Returns the indices that would sort the input tensor.

Parameters
  • inp (Tensor) – input tensor. If it is 2d, the result is an array of indices showing how to sort each row of the input tensor.

  • descending (bool) – sort in descending order, where the largest comes first. Default: False

Return type

Tensor

Returns

int32 indices indicating how to sort the input.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.array([1,2], dtype=np.float32))
indices = F.argsort(x)
print(indices.numpy())

Outputs:

[0 1]
megengine.functional.math.isinf(inp)[source]

Returns a new tensor representing if each element is Inf or not.

Parameters

inp (Tensor) – input tensor.

Return type

Tensor

Returns

result tensor.

Examples:

from megengine import tensor
import megengine.functional as F

x = tensor([1, float("inf"), 0])
print(F.isinf(x).numpy())

Outputs:

[False  True False]
megengine.functional.math.isnan(inp)[source]

Returns a new tensor representing if each element is NaN or not.

Parameters

inp (Tensor) – input tensor.

Return type

Tensor

Returns

result tensor.

Examples:

from megengine import tensor
import megengine.functional as F

x = tensor([1, float("nan"), 0])
print(F.isnan(x).numpy())

Outputs:

[False  True False]
megengine.functional.math.max(inp, axis=None, keepdims=False)[source]

Returns the max value of the input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2,3))
out = F.max(x)
print(out.numpy())

Outputs:

6
megengine.functional.math.mean(inp, axis=None, keepdims=False)[source]

Returns the mean value of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2, 3))
out = F.mean(x)
print(out.numpy())

Outputs:

3.5
megengine.functional.math.min(inp, axis=None, keepdims=False)[source]

Returns the min value of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2,3))
out = F.min(x)
print(out.numpy())

Outputs:

1
megengine.functional.math.norm(inp, ord=None, axis=None, keepdims=False)[source]

Calculates p-norm of input tensor along given axis.

Parameters
  • inp (Tensor) – input tensor.

  • ord (Optional[float]) – order of the norm. Default: 2

  • axis (Optional[int]) – dimension to reduce. If None, input must be a vector. Default: None

  • keepdims – whether the output tensor has axis retained or not. Default: False

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-3, 3, dtype=np.float32))
out = F.norm(x)
print(out.numpy().round(decimals=4))

Outputs:

4.3589
megengine.functional.math.normalize(inp, ord=None, axis=None, eps=1e-12)[source]

Performs \(L_p\) normalization of input tensor along given axis.

For a tensor of shape \((n_0, ..., n_{dim}, ..., n_k)\), each \(n_{dim}\) -element vector \(v\) along dimension axis is transformed as:

\[v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}.\]
Parameters
  • inp (Tensor) – input tensor.

  • ord (Optional[float]) – order of the norm. Default: 2

  • axis (Optional[int]) – dimension to reduce. If None, input must be a vector. Default: None

  • eps (float) – a small value to avoid division by zero. Default: 1e-12

Return type

Tensor

Returns

normalized output tensor.

megengine.functional.math.prod(inp, axis=None, keepdims=False)[source]

Returns the product of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2, 3))
out = F.prod(x)
print(out.numpy())

Outputs:

720
megengine.functional.math.sign(inp)[source]

Returns a new tensor representing the sign of each element in input tensor.

Parameters

inp (Tensor) – input tensor.

Returns

the sign of input tensor.

Examples:

from megengine import tensor
import megengine.functional as F

x = tensor([1, -1, 0])
print(F.sign(x).numpy())

Outputs:

[ 1 -1  0]
megengine.functional.math.sort(inp, descending=False)[source]

Returns the sorted tensor and the indices that would sort the input tensor.

Parameters
  • inp (Tensor) – input tensor. If it’s 2d, the result would be sorted by row.

  • descending (bool) – sort in descending order, where the largest comes first. Default: False

Return type

Tuple[Tensor, Tensor]

Returns

tuple of two tensors (sorted_tensor, indices_of_int32).

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.array([1,2], dtype=np.float32))
out, indices = F.sort(x)
print(out.numpy())

Outputs:

[1. 2.]
megengine.functional.math.std(inp, axis=None, keepdims=False)[source]

Returns the standard deviation of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data = tensor(np.arange(1, 7, dtype=np.float32).reshape(2, 3))
out = F.std(data, axis=1)
print(out.numpy().round(decimals=4))

Outputs:

[0.8165 0.8165]
megengine.functional.math.sum(inp, axis=None, keepdims=False)[source]

Returns the sum of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 7, dtype=np.int32).reshape(2, 3))
out = F.sum(x)
print(out.numpy())

Outputs:

21
megengine.functional.math.topk(inp, k, descending=False, kth_only=False, no_sort=False)[source]

Selects the Top-K smallest (by default) elements of a 2d matrix by row.

Parameters
  • inp (Tensor) – input tensor. If input tensor is 2d, each row will be sorted.

  • k (int) – number of elements needed.

  • descending (bool) – if True, return the largest elements instead. Default: False

  • kth_only (bool) – if True, only the k-th element will be returned. Default: False

  • no_sort (bool) – if True, the returned elements can be unordered. Default: False

Return type

Tuple[Tensor, Tensor]

Returns

tuple of two tensors (topk_tensor, indices_of_int32).

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.array([2, 4, 6, 8, 7, 5, 3, 1], dtype=np.float32))
top, indices = F.topk(x, 5)
print(top.numpy(), indices.numpy())

Outputs:

[1. 2. 3. 4. 5.] [7 0 6 1 5]
megengine.functional.math.var(inp, axis=None, keepdims=False)[source]

Returns the variance value of input tensor along given axis. If axis is a list of dimensions, reduce over all of them.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – dimension to reduce. If None, all dimensions will be reduced. Default: None

  • keepdims (bool) – whether the output tensor has axis retained or not. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data = tensor(np.arange(1, 7, dtype=np.float32).reshape(2, 3))
out = F.var(data)
print(out.numpy().round(decimals=4))

Outputs:

2.9167

megengine.functional.nn

megengine.functional.nn.conv_bias_activation(inp, weight, bias, dtype=None, stride=1, padding=0, dilation=1, groups=1, nonlinear_mode='IDENTITY', conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

Convolution with bias and activation operation, only for inference.

Parameters
  • inp (Tensor) – feature map of the convolution operation.

  • weight (Tensor) – convolution kernel.

  • bias (Tensor) – bias added to the result of convolution.

  • stride (Union[int, Tuple[int, int]]) – stride of the 2D convolution operation. Default: 1

  • padding (Union[int, Tuple[int, int]]) – size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (Union[int, Tuple[int, int]]) – dilation of the 2D convolution operation. Default: 1

  • groups (int) – number of groups into which the input and output channels are divided, so as to perform a “grouped convolution”. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channels // groups, in_channels // groups, height, width).

  • conv_mode (string or Convolution.Mode.) – supports ‘CROSS_CORRELATION’ or ‘CONVOLUTION’. Default: ‘CROSS_CORRELATION’

  • dtype – supports np.dtype. Default: np.int8

  • compute_mode (string or Convolution.ComputeMode.) – when set to “DEFAULT”, no special requirements will be placed on the precision of intermediate results. When set to “FLOAT32”, Float32 would be used for the accumulator and intermediate result, but this is only effective when input and output are of Float16 dtype.

Return type

Tensor

megengine.functional.nn.embedding(inp, weight, padding_idx=None, max_norm=None, norm_type=None)[source]

Applies lookup table for embedding.

Parameters
  • inp (Tensor) – tensor with indices.

  • weight (Tensor) – learnable weights to look up embeddings from.

  • padding_idx (Optional[int]) – should be set to None, not supported now.

  • max_norm (Optional[float]) – should be set to None, not supported now.

  • norm_type (Optional[float]) – should be set to None, not supported now.

Returns

output tensor.

Refer to Embedding for more information.

megengine.functional.nn.interpolate(inp, size=None, scale_factor=None, mode='BILINEAR', align_corners=None)[source]

Down/up samples the input tensor to either the given size or with the given scale_factor. size cannot coexist with scale_factor.

Parameters
  • inp (Tensor) – input tensor.

  • size (Union[int, Tuple[int, int], None]) – size of the output tensor. Default: None

  • scale_factor (Union[float, Tuple[float, float], None]) – scaling factor of the output tensor. Default: None

  • mode (str) – interpolation methods, acceptable values are: “BILINEAR”, “LINEAR”. Default: “BILINEAR”

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 5, dtype=np.float32).reshape(1, 1, 2, 2))
out = F.nn.interpolate(x, [4, 4], align_corners=False)
print(out.numpy())
out2 = F.nn.interpolate(x, scale_factor=2.)
np.testing.assert_allclose(out.numpy(), out2.numpy())

Outputs:

[[[[1.   1.25 1.75 2.  ]
   [1.5  1.75 2.25 2.5 ]
   [2.5  2.75 3.25 3.5 ]
   [3.   3.25 3.75 4.  ]]]]
megengine.functional.nn.linear(inp, weight, bias=None)[source]

Applies a linear transformation to the input tensor.

Refer to Linear for more information.

Parameters
  • inp (Tensor) – input tensor with shape (N, in_features).

  • weight (Tensor) – weight with shape (out_features, in_features).

  • bias (Optional[Tensor]) – bias with shape (out_features,). Default: None

Return type

Tensor

megengine.functional.nn.nms(boxes, scores, iou_thresh, max_output=None)[source]

Performs non-maximum suppression (NMS) on the boxes according to their intersection-over-union(IoU).

Parameters
  • boxes (Tensor) – tensor of shape (N, 4); the boxes to perform nms on; each box is expected to be in (x1, y1, x2, y2) format.

  • iou_thresh (float) – IoU threshold for overlapping.

  • scores (Tensor) – tensor of shape (N,), the score of boxes.

  • max_output (Optional[int]) – the maximum number of boxes to keep; it is optional if this operator is not traced, otherwise it is required to be specified; if not specified, all boxes are kept.

Return type

Tensor

Returns

indices of the elements that have been kept by NMS.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = np.zeros((100,4))
np.random.seed(42)
x[:,:2] = np.random.rand(100,2)*20
x[:,2:] = np.random.rand(100,2)*20 + 100
scores = tensor(np.random.rand(100))
inp = tensor(x)
result = F.nn.nms(inp, scores, iou_thresh=0.7)
print(result.numpy())

Outputs:

[75 69]
megengine.functional.nn.roi_align(inp, rois, output_shape, mode='average', spatial_scale=1.0, sample_points=2, aligned=True)[source]

Applies roi align on input feature.

Parameters
  • inp (Tensor) – tensor that represents the input feature, shape is (N, C, H, W).

  • rois (Tensor) – (N, 5) boxes. First column is the box index. The other 4 columns are xyxy.

  • output_shape (Union[int, tuple, list]) – (height, width) shape of output rois feature.

  • mode (str) – “max” or “average”, use max/average align just like max/average pooling. Default: “average”

  • spatial_scale (float) – scale the input boxes by this number. Default: 1.0

  • sample_points (Union[int, tuple, list]) – number of inputs samples to take for each output sample. 0 to take samples densely. Default: 2

  • aligned (bool) – whether to align the input feature; with aligned=True, we first appropriately scale the ROI and then shift it by -0.5. Default: True

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

np.random.seed(42)
inp = tensor(np.random.randn(1, 1, 128, 128))
rois = tensor(np.random.random((4, 5)))
y = F.nn.roi_align(inp, rois, (2, 2))
print(y.numpy()[0].round(decimals=4))

Outputs:

[[[0.175  0.175 ]
  [0.1359 0.1359]]]
megengine.functional.nn.roi_pooling(inp, rois, output_shape, mode='max', scale=1.0)[source]

Applies roi pooling on input feature.

Parameters
  • inp (Tensor) – tensor that represents the input feature, (N, C, H, W) images.

  • rois (Tensor) – (K, 5) boxes. First column is the index into N. The other 4 columns are xyxy.

  • output_shape (Union[int, tuple, list]) – (height, width) of output rois feature.

  • mode (str) – “max” or “average”, use max/average align just like max/average pooling. Default: “max”

  • scale (float) – scale the input boxes by this number. Default: 1.0

Return type

Tensor

Returns

(K, C, output_shape[0], output_shape[1]) feature of rois.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

np.random.seed(42)
inp = tensor(np.random.randn(1, 1, 128, 128))
rois = tensor(np.random.random((4, 5)))
y = F.nn.roi_pooling(inp, rois, (2, 2))
print(y.numpy()[0].round(decimals=4))

Outputs:

[[[-0.1383 -0.1383]
  [-0.5035 -0.5035]]]
megengine.functional.nn.sync_batch_norm(inp, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.9, eps=1e-05, eps_mode='ADDITIVE', group=<megengine.distributed.group.Group object>)[source]

Applies synchronized batch normalization to the input.

Refer to BatchNorm2d and BatchNorm1d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • running_mean (Tensor) – tensor to store running mean.

  • running_var (Tensor) – tensor to store running variance.

  • weight (Optional[Tensor]) – scaling tensor in the learnable affine parameters. See \(\gamma\) in BatchNorm2d.

  • bias (Optional[Tensor]) – bias tensor in the learnable affine parameters. See \(\beta\) in BatchNorm2d.

  • training (bool) – a boolean value to indicate whether batch norm is performed in training mode. Default: False

  • momentum (Union[float, Tensor]) – value used for the running_mean and running_var computation. Default: 0.9

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5

Return type

Tensor

Returns

output tensor.

megengine.functional.nn.adaptive_avg_pool2d(inp, oshp)[source]

Applies a 2D average adaptive pooling over an input.

Refer to AvgAdaptivePool2d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • oshp (Union[Tuple[int, int], int, Tensor]) – (OH, OW) size of the output shape.

Return type

Tensor

Returns

output tensor.
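The documented behavior can be sketched in NumPy. This is an illustration only, not the MegEngine implementation; it assumes the common adaptive-pooling bin convention where output bin i averages input rows floor(i*H/OH) up to ceil((i+1)*H/OH):

```python
import math
import numpy as np

def adaptive_avg_pool2d_ref(x, oshp):
    # x: (N, C, H, W); oshp: (OH, OW); each output cell averages one input bin
    n, c, h, w = x.shape
    oh, ow = oshp
    out = np.empty((n, c, oh, ow), dtype=x.dtype)
    for i in range(oh):
        h0, h1 = (i * h) // oh, math.ceil((i + 1) * h / oh)
        for j in range(ow):
            w0, w1 = (j * w) // ow, math.ceil((j + 1) * w / ow)
            out[:, :, i, j] = x[:, :, h0:h1, w0:w1].mean(axis=(2, 3))
    return out
```

For an evenly divisible input (e.g. 4x4 pooled to 2x2) this reduces to ordinary 2x2 average pooling.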

megengine.functional.nn.adaptive_max_pool2d(inp, oshp)[source]

Applies a 2D max adaptive pooling over an input.

Refer to MaxAdaptivePool2d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • oshp (Union[Tuple[int, int], int, Tensor]) – (OH, OW) size of the output shape.

Return type

Tensor

Returns

output tensor.

megengine.functional.nn.avg_pool2d(inp, kernel_size, stride=None, padding=0, mode='AVERAGE_COUNT_EXCLUDE_PADDING')[source]

Applies 2D average pooling over an input tensor.

Refer to AvgPool2d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • kernel_size (Union[int, Tuple[int, int]]) – size of the window.

  • stride (Union[int, Tuple[int, int], None]) – stride of the window. If not provided, its value is set to kernel_size. Default: None

  • padding (Union[int, Tuple[int, int]]) – implicit zero padding added on both sides. Default: 0

  • mode (str) – whether to count padding values. Default: “AVERAGE_COUNT_EXCLUDE_PADDING”

Return type

Tensor

Returns

output tensor.

megengine.functional.nn.batch_norm(inp, running_mean=None, running_var=None, weight=None, bias=None, *, training=False, momentum=0.9, eps=1e-05, inplace=True)[source]

Applies batch normalization to the input.

Refer to BatchNorm2d and BatchNorm1d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • running_mean (Optional[Tensor]) – tensor to store running mean.

  • running_var (Optional[Tensor]) – tensor to store running variance.

  • weight (Optional[Tensor]) – scaling tensor in the learnable affine parameters. See \(\gamma\) in BatchNorm2d.

  • bias (Optional[Tensor]) – bias tensor in the learnable affine parameters. See \(\beta\) in BatchNorm2d.

  • training (bool) – a boolean value to indicate whether batch norm is performed in training mode. Default: False

  • momentum (float) – value used for the running_mean and running_var computation. Default: 0.9

  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5

  • inplace (bool) – whether to update running_mean and running_var inplace or return new tensors Default: True

Returns

output tensor.
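In inference mode (training=False) the normalization uses the stored running statistics. A minimal NumPy sketch of that formula (an illustration of the math, not the MegEngine implementation):

```python
import numpy as np

def batch_norm_infer_ref(x, running_mean, running_var, weight, bias, eps=1e-5):
    # x: (N, C, H, W); statistics and affine parameters are per-channel, shape (C,)
    shape = (1, -1, 1, 1)
    x_hat = (x - running_mean.reshape(shape)) / np.sqrt(running_var.reshape(shape) + eps)
    return weight.reshape(shape) * x_hat + bias.reshape(shape)
```

In training mode the batch mean/variance are used instead, and running_mean/running_var are updated with the given momentum.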

megengine.functional.nn.conv1d(inp, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

1D convolution operation.

Refer to Conv1d for more information.

Parameters
  • inp (Tensor) – The feature map of the convolution operation

  • weight (Tensor) – The convolution kernel

  • bias (Optional[Tensor]) – The bias added to the result of convolution (if given)

  • stride (int) – Stride of the 1D convolution operation. Default: 1

  • padding (int) – Size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (int) – Dilation of the 1D convolution operation. Default: 1

  • groups (int) – number of groups to divide input and output channels into, so as to perform a “grouped convolution”. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channels // groups, in_channels // groups, kernel_size). Default: 1

  • conv_mode (string or mgb.opr_param_defs.Convolution.Mode) – Supports ‘CROSS_CORRELATION’. Default: ‘CROSS_CORRELATION’.

  • compute_mode (string or mgb.opr_param_defs.Convolution.ComputeMode) – When set to ‘DEFAULT’, no special requirements will be placed on the precision of intermediate results. When set to ‘FLOAT32’, Float32 would be used for accumulator and intermediate result, but only effective when input and output are of Float16 dtype.

Return type

Tensor

megengine.functional.nn.conv2d(inp, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

2D convolution operation.

Refer to Conv2d for more information.

Parameters
  • inp (Tensor) – feature map of the convolution operation.

  • weight (Tensor) – convolution kernel.

  • bias (Optional[Tensor]) – bias added to the result of convolution (if given).

  • stride (Union[int, Tuple[int, int]]) – stride of the 2D convolution operation. Default: 1

  • padding (Union[int, Tuple[int, int]]) – size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (Union[int, Tuple[int, int]]) – dilation of the 2D convolution operation. Default: 1

  • groups (int) – number of groups into which the input and output channels are divided, so as to perform a grouped convolution. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channel // groups, in_channels // groups, height, width).

  • conv_mode (string or Convolution.Mode) – supports “CROSS_CORRELATION”. Default: “CROSS_CORRELATION”

  • compute_mode (string or Convolution.ComputeMode) – when set to “DEFAULT”, no special requirements will be placed on the precision of intermediate results. When set to “FLOAT32”, “Float32” would be used for accumulator and intermediate result, but only effective when input and output are of Float16 dtype.

Return type

Tensor

Returns

output tensor.
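The spatial output size of the convolution follows the standard convolution arithmetic. A small helper (an illustration, not part of the MegEngine API) computes it per dimension:

```python
def conv2d_out_size(in_size, kernel_size, stride=1, padding=0, dilation=1):
    # dilation enlarges the effective kernel extent
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (in_size + 2 * padding - effective_kernel) // stride + 1
```

For example, a 3x3 kernel with padding 1 and stride 1 preserves the spatial size.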

megengine.functional.nn.conv_transpose2d(inp, weight, bias=None, stride=1, padding=0, dilation=1, groups=1, conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

2D transposed convolution operation.

Refer to ConvTranspose2d for more information.

Parameters
  • inp (Tensor) – feature map of the convolution operation.

  • weight (Tensor) – convolution kernel.

  • bias (Optional[Tensor]) – bias added to the result of convolution (if given).

  • stride (Union[int, Tuple[int, int]]) – stride of the 2D convolution operation. Default: 1

  • padding (Union[int, Tuple[int, int]]) – size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (Union[int, Tuple[int, int]]) – dilation of the 2D convolution operation. Default: 1

  • groups (int) – number of groups into which the input and output channels are divided, so as to perform a grouped convolution. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channel // groups, in_channels // groups, height, width). Default: 1

  • conv_mode (string or Convolution.Mode) – supports “CROSS_CORRELATION”. Default: “CROSS_CORRELATION”

  • compute_mode (string or Convolution.ComputeMode) – when set to “DEFAULT”, no special requirements will be placed on the precision of intermediate results. When set to “FLOAT32”, “Float32” would be used for accumulator and intermediate result, but only effective when input and output are of Float16 dtype.

Return type

Tensor

Returns

output tensor.
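Transposed convolution inverts the forward-convolution size arithmetic. A small helper (an illustration, not part of the MegEngine API) computes the output spatial size per dimension:

```python
def conv_transpose2d_out_size(in_size, kernel_size, stride=1, padding=0, dilation=1):
    # inverse of the conv2d output-size formula
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (in_size - 1) * stride - 2 * padding + effective_kernel
```

A conv2d that maps size 7 to size 3 (kernel 3, stride 2) is inverted by a conv_transpose2d with the same hyperparameters mapping 3 back to 7.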

megengine.functional.nn.dot(inp1, inp2)[source]

Computes dot-product of two vectors inp1 and inp2. inputs must be 1-dimensional or scalar. A scalar input is automatically broadcasted. Refer to matmul() for more general usage.

Parameters
  • inp1 (Tensor) – first vector.

  • inp2 (Tensor) – second vector.

Return type

Tensor

Returns

output value.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data1 = tensor(np.arange(0, 6, dtype=np.float32))
data2 = tensor(np.arange(0, 6, dtype=np.float32))
out = F.dot(data1, data2)
print(out.numpy())

Outputs:

55.
megengine.functional.nn.dropout(inp, drop_prob, training=True)[source]

Returns a new tensor in which each element is randomly set to zero with probability P = drop_prob. Optionally rescales the output tensor if training is True.

Parameters
  • inp (Tensor) – input tensor.

  • drop_prob (float) – probability to drop (set to zero) a single element.

  • training (bool) – whether dropout is applied in training mode. In training mode the kept elements are rescaled by 1 / (1 - drop_prob), so the operation can be replaced by an identity during inference. Default: True

Return type

Tensor

Returns

the output tensor

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.ones(10, dtype=np.float32))
out = F.dropout(x, 1./3.)
print(out.numpy())

Outputs:

[1.5 1.5 0.  1.5 1.5 1.5 1.5 1.5 1.5 1.5]
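The rescaling visible above (kept values become 1 / (1 - drop_prob) = 1.5) can be sketched in NumPy. This is an illustration of the documented semantics, not the MegEngine implementation:

```python
import numpy as np

def dropout_ref(x, drop_prob, rng):
    # keep each element with probability 1 - drop_prob, rescale the survivors
    keep = rng.random(x.shape) >= drop_prob
    return np.where(keep, x / (1.0 - drop_prob), 0.0)
```

Rescaling at training time keeps the expected value of each element unchanged, which is why dropout can be dropped entirely at inference time.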
megengine.functional.nn.indexing_one_hot(src, index, axis=1, keepdims=False)[source]

One-hot indexing for some axes.

Parameters
  • src (Tensor) – input tensor.

  • index (Tensor) – index tensor.

  • axis (int) – axis of src along which the values in index select elements. Default: 1

  • keepdims – whether to keep the indexed axis in the result. Default: False

Return type

Tensor

Returns

output tensor.

Examples:

import megengine.functional as F
from megengine import tensor

src = tensor([[1.0, 2.0]])
index = tensor([0])
val = F.indexing_one_hot(src, index)
print(val.numpy())

Outputs:

[1.]
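The same selection can be expressed in NumPy with take_along_axis (a sketch of the documented semantics, not the MegEngine implementation):

```python
import numpy as np

def indexing_one_hot_ref(src, index, axis=1, keepdims=False):
    # pick one element per slice of src along `axis`, chosen by `index`
    picked = np.take_along_axis(src, np.expand_dims(index, axis), axis)
    return picked if keepdims else np.squeeze(picked, axis)
```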
megengine.functional.nn.leaky_relu(inp, negative_slope=0.01)[source]

Applies the element-wise leaky_relu function

Refer to LeakyReLU for more information.

Return type

Tensor

megengine.functional.nn.local_conv2d(inp, weight, bias=None, stride=1, padding=0, dilation=1, conv_mode='CROSS_CORRELATION')[source]

Applies a spatial 2D convolution with untied (location-specific) kernels over a grouped, multi-channel image.

megengine.functional.nn.logsigmoid(inp)[source]

Applies the element-wise function:

\[\text{logsigmoid}(x) = \log\left(\frac{1}{1 + \exp(-x)}\right) = -\log(1 + \exp(-x)) = -\text{softplus}(-x)\]
Parameters

inp (Tensor) – input tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-5, 5, dtype=np.float32))
y = F.logsigmoid(x)
print(y.numpy().round(decimals=4))

Outputs:

[-5.0067 -4.0182 -3.0486 -2.1269 -1.3133 -0.6931 -0.3133 -0.1269 -0.0486
 -0.0181]
Return type

Tensor

megengine.functional.nn.logsoftmax(inp, axis)[source]

Applies the \(\log(\text{softmax}(x))\) function to an n-dimensional input tensor. The \(\text{logsoftmax}(x)\) formulation can be simplified as:

\[\text{logsoftmax}(x_{i}) = \log(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} )\]

For numerical stability the implementation follows this transformation:

\[\text{logsoftmax}(x) = \log (\frac{\exp (x)}{\sum_{i}(\exp (x_{i}))}) = x - \log (\sum_{i}(\exp (x_{i}))) = x - \text{logsumexp}(x)\]
Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int]]) – axis along which \(\text{logsoftmax}(x)\) will be applied.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-5, 5, dtype=np.float32)).reshape(2,5)
y = F.logsoftmax(x, axis=1)
print(y.numpy().round(decimals=4))

Outputs:

[[-4.4519 -3.4519 -2.4519 -1.4519 -0.4519]
 [-4.4519 -3.4519 -2.4519 -1.4519 -0.4519]]
Return type

Tensor

megengine.functional.nn.logsumexp(inp, axis, keepdims=False)[source]

Calculates the logarithm of the inputs’ exponential sum along the given axis.

\[\text{logsumexp}(x)= \log \sum_{j=1}^{n} \exp \left(x_{j}\right)\]

For numerical stability, the implementation follows this transformation:

\[\text{logsumexp}(x) = \log \sum_{j=1}^{n} \exp \left(x_{j}\right) = b + \log \sum_{j=1}^{n} \exp \left(x_{j}-b\right)\]

where

\[b = \max(x_j)\]
Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int]]) – axis over which the sum is taken. It could be single axis or list of axes.

  • keepdims (bool) – whether to retain axis or not for the output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-5, 5, dtype=np.float32)).reshape(2,5)
y = F.logsumexp(x, axis=1, keepdims=False)
print(y.numpy().round(decimals=4))

Outputs:

[-0.5481  4.4519]
Return type

Tensor
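The max-shift transformation above is what keeps the computation stable; a NumPy sketch (an illustration of the formula, not the MegEngine implementation):

```python
import numpy as np

def logsumexp_ref(x, axis, keepdims=False):
    # subtract the per-axis max so exp() never overflows
    b = np.max(x, axis=axis, keepdims=True)
    out = b + np.log(np.sum(np.exp(x - b), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis)
```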

megengine.functional.nn.matmul(inp1, inp2, transpose_a=False, transpose_b=False, compute_mode='DEFAULT', format='DEFAULT')[source]

Performs a matrix multiplication of the matrices inp1 and inp2.

With different inputs dim, this function behaves differently:

  • Both 1-D tensor, simply forward to dot.

  • Both 2-D tensor, normal matrix multiplication.

  • If one input tensor is 1-D, matrix vector multiplication.

  • If at least one tensor is 3-dimensional or higher, the other tensor must have dim >= 2; a batched matrix-matrix product is returned, and the tensor with fewer dimensions is broadcasted. For example:

    • inp1: (n, k, m), inp2: (n, m, p), return: (n, k, p)

    • inp1: (n, k, m), inp2: (m, p), return: (n, k, p)

    • inp1: (n, j, k, m), inp2: (n, j, m, p), return: (n, j, k, p)

Parameters
  • inp1 (Tensor) – first matrix to be multiplied.

  • inp2 (Tensor) – second matrix to be multiplied.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data1 = tensor(np.arange(0, 6, dtype=np.float32).reshape(2, 3))
data2 = tensor(np.arange(0, 6, dtype=np.float32).reshape(3, 2))
out = F.matmul(data1, data2)
print(out.numpy())

Outputs:

[[10. 13.]
 [28. 40.]]
megengine.functional.nn.max_pool2d(inp, kernel_size, stride=None, padding=0)[source]

Applies a 2D max pooling over an input tensor.

Refer to MaxPool2d for more information.

Parameters
  • inp (Tensor) – input tensor.

  • kernel_size (Union[int, Tuple[int, int]]) – size of the window.

  • stride (Union[int, Tuple[int, int], None]) – stride of the window. If not provided, its value is set to kernel_size. Default: None

  • padding (Union[int, Tuple[int, int]]) – implicit zero padding added on both sides. Default: 0

Return type

Tensor

Returns

output tensor.
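The windowed maximum can be sketched in NumPy (an illustration of the semantics for the unpadded case, not the MegEngine implementation):

```python
import numpy as np

def max_pool2d_ref(x, kernel_size, stride=None):
    # x: (N, C, H, W); square window, no padding, illustration only
    k = kernel_size
    s = stride or k  # stride defaults to kernel_size, as documented
    n, c, h, w = x.shape
    oh, ow = (h - k) // s + 1, (w - k) // s + 1
    out = np.empty((n, c, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            out[:, :, i, j] = x[:, :, i * s:i * s + k, j * s:j * s + k].max(axis=(2, 3))
    return out
```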

megengine.functional.nn.one_hot(inp, num_classes)[source]

Performs one-hot encoding for the input tensor.

Parameters
  • inp (Tensor) – input tensor.

  • num_classes (int) – number of classes denotes the last dimension of the output tensor.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(1, 4, dtype=np.int32))
out = F.one_hot(x, num_classes=4)
print(out.numpy())

Outputs:

[[0 1 0 0]
 [0 0 1 0]
 [0 0 0 1]]
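One-hot encoding amounts to indexing rows of an identity matrix; a one-line NumPy sketch of the same output (not the MegEngine implementation):

```python
import numpy as np

def one_hot_ref(x, num_classes):
    # row i of the identity matrix is the one-hot vector for class i
    return np.eye(num_classes, dtype=np.int32)[x]
```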
megengine.functional.nn.prelu(inp, weight)[source]

Applies the element-wise PReLU function.

Refer to PReLU for more information.

Return type

Tensor

megengine.functional.nn.remap(inp, map_xy, border_mode='REPLICATE', scalar=0.0, interp_mode='LINEAR')[source]

Applies remap transformation to batched 2D images.

The input images are transformed to the output images by the tensor map_xy. The output's H and W are the same as map_xy's H and W.

Parameters
  • inp (Tensor) – input image

  • map_xy (Tensor) – (batch, oh, ow, 2) transformation matrix

  • border_mode (str) – pixel extrapolation method. Default: “REPLICATE”. Currently also support “CONSTANT”, “REFLECT”, “REFLECT_101”, “WRAP”.

  • scalar (float) – value used in case of a constant border. Default: 0

  • interp_mode (str) – interpolation methods. Default: “LINEAR”. Currently only support “LINEAR” mode.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F
inp_shape = (1, 1, 4, 4)
inp = tensor(np.arange(16, dtype=np.float32).reshape(inp_shape))
map_xy_shape = (1, 2, 2, 2)
map_xy = tensor(np.array([[[1., 0.],[0., 1.]],
                    [[0., 1.],[0., 1.]]],
                     dtype=np.float32).reshape(map_xy_shape))
out = F.remap(inp, map_xy)
print(out.numpy())

Outputs:

[[[[1. 4.]
   [4. 4.]]]]
megengine.functional.nn.softmax(inp, axis=None)[source]

Applies a \(\text{softmax}(x)\) function. \(\text{softmax}(x)\) is defined as:

\[\text{softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]

It is applied to all elements along axis, and rescales elements so that they stay in the range [0, 1] and sum to 1.

See Softmax for more details.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Optional[int]) – an axis along which \(\text{softmax}(x)\) will be applied. By default, \(\text{softmax}(x)\) will apply along the highest ranked axis.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-5, 5, dtype=np.float32)).reshape(2,5)
out = F.softmax(x)
print(out.numpy().round(decimals=4))

Outputs:

[[0.0117 0.0317 0.0861 0.2341 0.6364]
 [0.0117 0.0317 0.0861 0.2341 0.6364]]
Return type

Tensor

megengine.functional.nn.softplus(inp)[source]

Applies the element-wise function:

\[\text{softplus}(x) = \log(1 + \exp(x))\]

softplus is a smooth approximation to the ReLU function and can be used to constrain the output to be always positive. For numerical stability the implementation follows this transformation:

\[\text{softplus}(x) = \log(1 + \exp(x)) = \log(1 + \exp(-\text{abs}(x))) + \max(x, 0) = \log1p(\exp(-\text{abs}(x))) + \text{relu}(x)\]
Parameters

inp (Tensor) – input tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(-3, 3, dtype=np.float32))
y = F.softplus(x)
print(y.numpy().round(decimals=4))

Outputs:

[0.0486 0.1269 0.3133 0.6931 1.3133 2.1269]
Return type

Tensor
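The stable form given above matters for large inputs, where the naive log(1 + exp(x)) overflows. A NumPy sketch of the transformed formula (an illustration, not the MegEngine implementation):

```python
import numpy as np

def softplus_ref(x):
    # log1p(exp(-|x|)) + relu(x): never exponentiates a large positive number
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)
```

At x = 1000 the naive form returns inf, while the stable form returns 1000 as expected.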

megengine.functional.nn.svd(inp, full_matrices=False, compute_uv=True)[source]

Computes the singular value decomposition of the input matrix.

Parameters

inp (Tensor) – input matrix; must have shape […, M, N].

Return type

Tensor

Returns

output matrices, (U, sigma, V).

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.arange(0, 6, dtype=np.float32).reshape(2,3))
_, y, _ = F.svd(x)
print(y.numpy().round(decimals=3))

Outputs:

[7.348 1.   ]
megengine.functional.nn.warp_perspective(inp, M, dsize, border_mode='REPLICATE', border_val=0.0, interp_mode='LINEAR')[source]

Applies perspective transformation to batched 2D images.

The input images are transformed to the output images by the transformation matrix:

\[\text{output}(n, c, h, w) = \text{input} \left( n, c, \frac{M_{00}h + M_{01}w + M_{02}}{M_{20}h + M_{21}w + M_{22}}, \frac{M_{10}h + M_{11}w + M_{12}}{M_{20}h + M_{21}w + M_{22}} \right)\]
Parameters
  • inp (Tensor) – input image.

  • M (Tensor) – (batch, 3, 3) transformation matrix.

  • dsize (Union[Tuple[int, int], int, Tensor]) – (h, w) size of the output image.

  • border_mode (str) – pixel extrapolation method. Default: “REPLICATE”. Currently also support “CONSTANT”, “REFLECT”, “REFLECT_101”, “WRAP”.

  • border_val (float) – value used in case of a constant border. Default: 0

  • interp_mode (str) – interpolation methods. Default: “LINEAR”. Currently only support “LINEAR” mode.

Return type

Tensor

Returns

output tensor.

Note:

The transformation matrix is the inverse of that used by cv2.warpPerspective.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

inp_shape = (1, 1, 4, 4)
x = tensor(np.arange(16, dtype=np.float32).reshape(inp_shape))
M_shape = (1, 3, 3)
# M defines a translation: dst(1, 1, h, w) = src(1, 1, h+1, w+1)
M = tensor(np.array([[1., 0., 1.],
                     [0., 1., 1.],
                     [0., 0., 1.]], dtype=np.float32).reshape(M_shape))
out = F.warp_perspective(x, M, (2, 2))
print(out.numpy())

Outputs:

[[[[ 5.  6.]
   [ 9. 10.]]]]

megengine.functional.quantized

megengine.functional.quantized.apply()
megengine.functional.quantized.batch_conv_bias_activation(inp, weight, bias, dtype=None, stride=1, padding=0, dilation=1, groups=1, nonlinear_mode='IDENTITY', conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

Batch convolution bias with activation operation, only for inference.

Parameters
  • inp (Tensor) – feature map of the convolution operation.

  • weight (Tensor) – convolution kernel in batched way.

  • bias (Tensor) – bias added to the result of convolution

  • stride (Union[int, Tuple[int, int]]) – stride of the 2D convolution operation. Default: 1

  • padding (Union[int, Tuple[int, int]]) – size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (Union[int, Tuple[int, int]]) – dilation of the 2D convolution operation. Default: 1

  • groups (int) – number of groups into which the input and output channels are divided, so as to perform a “grouped convolution”. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channel // groups, in_channels // groups, height, width).

  • conv_mode (string or Convolution.Mode.) – supports ‘CROSS_CORRELATION’ or ‘CONVOLUTION’. Default: ‘CROSS_CORRELATION’

  • dtype – support for np.dtype, Default: np.int8

  • compute_mode (string or Convolution.ComputeMode.) – when set to “DEFAULT”, no special requirements will be placed on the precision of intermediate results. When set to “FLOAT32”, “Float32” would be used for accumulator and intermediate result, but only effective when input and output are of Float16 dtype.

Return type

Tensor

megengine.functional.quantized.conv_bias_activation(inp, weight, bias, dtype=None, stride=1, padding=0, dilation=1, groups=1, nonlinear_mode='IDENTITY', conv_mode='CROSS_CORRELATION', compute_mode='DEFAULT')[source]

Convolution bias with activation operation, only for inference.

Parameters
  • inp (Tensor) – feature map of the convolution operation.

  • weight (Tensor) – convolution kernel.

  • bias (Tensor) – bias added to the result of convolution

  • stride (Union[int, Tuple[int, int]]) – stride of the 2D convolution operation. Default: 1

  • padding (Union[int, Tuple[int, int]]) – size of the paddings added to the input on both sides of its spatial dimensions. Only zero-padding is supported. Default: 0

  • dilation (Union[int, Tuple[int, int]]) – dilation of the 2D convolution operation. Default: 1

  • groups (int) – number of groups into which the input and output channels are divided, so as to perform a “grouped convolution”. When groups is not 1, in_channels and out_channels must be divisible by groups, and the shape of weight should be (groups, out_channel // groups, in_channels // groups, height, width).

  • conv_mode (string or Convolution.Mode.) – supports ‘CROSS_CORRELATION’ or ‘CONVOLUTION’. Default: ‘CROSS_CORRELATION’

  • dtype – support for np.dtype, Default: np.int8

  • compute_mode (string or Convolution.ComputeMode.) – when set to “DEFAULT”, no special requirements will be placed on the precision of intermediate results. When set to “FLOAT32”, “Float32” would be used for accumulator and intermediate result, but only effective when input and output are of Float16 dtype.

Return type

Tensor

megengine.functional.tensor

megengine.functional.tensor.arange(start=0, stop=None, step=1, dtype='float32', device=None)[source]

Returns a tensor with values from start to stop with adjacent interval step.

Parameters
  • start (Union[int, float, Tensor]) – starting value of the sequence; should be a scalar.

  • stop (Union[int, float, Tensor, None]) – ending value of the sequence; should be a scalar.

  • step (Union[int, float, Tensor]) – gap between each pair of adjacent values. Default: 1

  • dtype – result data type.

Return type

Tensor

Returns

generated tensor.

Examples:

import numpy as np
import megengine.functional as F

a = F.arange(5)
print(a.numpy())

Outputs:

[0. 1. 2. 3. 4.]
megengine.functional.tensor.broadcast_to(inp, shape)[source]

Broadcasts a tensor to given shape.

Parameters
  • inp (Tensor) – input tensor.

  • shape (Union[int, Iterable[int]]) – target shape.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data = tensor(np.arange(0, 3, dtype=np.float32).reshape(3))
out = F.broadcast_to(data, (2, 3))
print(out.numpy())

Outputs:

[[0. 1. 2.]
 [0. 1. 2.]]
megengine.functional.tensor.concat(inps, axis=0, device=None)[source]

Concatenates the input tensors along the given axis.

Parameters
  • inps (Iterable[Tensor]) – input tensors to concat.

  • axis (int) – over which dimension the tensors are concatenated. Default: 0

  • device – device on which the output will be placed. Default: None

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

data1 = tensor(np.arange(0, 6, dtype=np.float32).reshape((2, 3)))
data2 = tensor(np.arange(6, 12, dtype=np.float32).reshape((2, 3)))
out = F.concat([data1, data2])
print(out.numpy())

Outputs:

[[ 0.  1.  2.]
 [ 3.  4.  5.]
 [ 6.  7.  8.]
 [ 9. 10. 11.]]
megengine.functional.tensor.cond_take(mask, x)[source]

Takes elements from x where the corresponding condition in mask is satisfied. This operator has two outputs: the first is the elements taken, and the second is the indices of those elements; both are 1-dimensional. High-dimensional inputs are flattened first.

Parameters
  • mask (Tensor) – condition tensor; must have the same shape as x.

  • x (Tensor) – input tensor from which to take elements.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F
mask = tensor(np.array([[True, False], [False, True]], dtype=np.bool_))
x = tensor(np.array([[1, np.inf], [np.nan, 4]],
    dtype=np.float32))
v, index = F.cond_take(mask, x)
print(v.numpy(), index.numpy())

Outputs:

[1. 4.] [0 3]
Return type

Tensor

megengine.functional.tensor.expand_dims(inp, axis)[source]

Adds a dimension before the given axis.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int]]) – place of new axes.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor([1, 2])
out = F.expand_dims(x, 0)
print(out.numpy().shape)

Outputs:

(1, 2)
megengine.functional.tensor.eye(N, M=None, *, dtype='float32', device=None)[source]

Returns a 2D tensor with ones on the diagonal and zeros elsewhere.

Parameters
  • N – number of rows of the output matrix.

  • M – number of columns of the output matrix. Default: N

  • dtype – data type. Default: “float32”

  • device (Optional[CompNode]) – compute node of the matrix. Default: None

Return type

Tensor

Returns

eye matrix.

Examples:

import numpy as np
import megengine.functional as F

out = F.eye(4, 6, dtype=np.float32)
print(out.numpy())

Outputs:

[[1. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0.]]
megengine.functional.tensor.flatten(inp, start_axis=0, end_axis=-1)[source]

Reshapes the tensor by flattening the sub-tensor from dimension start_axis to dimension end_axis.

Parameters
  • inp (Tensor) – input tensor.

  • start_axis (int) – first dimension of the sub-tensor to be flattened. Default: 0

  • end_axis (int) – last dimension of the sub-tensor to be flattened. Default: -1

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

inp_shape = (2, 2, 3, 3)
x = tensor(
    np.arange(36, dtype=np.int32).reshape(inp_shape),
)
out = F.flatten(x, 2)
print(x.numpy().shape)
print(out.numpy().shape)

Outputs:

(2, 2, 3, 3)
(2, 2, 9)
megengine.functional.tensor.full(shape, value, dtype='float32', device=None)[source]

Returns a tensor with given shape and value.

megengine.functional.tensor.full_like(inp, value)[source]

Returns a tensor filled with given value with the same shape as input tensor.

Return type

Tensor

megengine.functional.tensor.gather(inp, axis, index)[source]

Gathers data from input tensor on axis using index.

For a 3-D tensor, the output is specified by:

out[i][j][k] = inp[index[i][j][k]][j][k] # if axis == 0
out[i][j][k] = inp[i][index[i][j][k]][k] # if axis == 1
out[i][j][k] = inp[i][j][index[i][j][k]] # if axis == 2

If the input tensor is an n-dimensional tensor with size \((x_0,x_1,...,x_{i-1},x_i,x_{i+1},...,x_{n-1})\) and axis=i, then index must be an n-dimensional tensor with size \((x_0,x_1,...,x_{i-1},y,x_{i+1},...,x_{n-1})\) where \(y\ge 1\), and the output will have the same size as index.

Parameters
  • inp (Tensor) – input tensor.

  • axis (int) – along which axis to index.

  • index (Tensor) – indices of elements to gather.

Return type

Tensor

Returns

output tensor.

Examples:

import megengine.functional as F
from megengine import tensor

inp = tensor([
    [1,2], [3,4], [5,6],
])
index = tensor([[0,2], [1,0]])
oup = F.gather(inp, 0, index)
print(oup.numpy())

Outputs:

[[1 6]
 [3 2]]
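The gather formulas above match NumPy's take_along_axis; a sketch reproducing the example (an illustration, not the MegEngine implementation):

```python
import numpy as np

def gather_ref(inp, axis, index):
    # out[i][j] = inp[index[i][j]][j] for axis == 0; output has index's shape
    return np.take_along_axis(inp, index, axis)
```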
megengine.functional.tensor.linspace(start, stop, num, dtype='float32', device=None)[source]

Returns equally spaced numbers over a specified interval.

Parameters
  • start (Union[int, float, Tensor]) – starting value of the sequence; should be a scalar.

  • stop (Union[int, float, Tensor]) – last value of the sequence; should be a scalar.

  • num (Union[int, Tensor]) – number of values to generate.

  • dtype – result data type.

Return type

Tensor

Returns

generated tensor.

Examples:

import numpy as np
import megengine.functional as F

a = F.linspace(3,10,5)
print(a.numpy())

Outputs:

[ 3.    4.75  6.5   8.25 10.  ]
megengine.functional.tensor.ones(shape, dtype='float32', device=None)[source]

Returns a ones tensor with given shape.

Parameters

shape – expected shape of the output tensor.

Returns

output ones tensor.

Examples:

import megengine.functional as F

out = F.ones((2, 1))
print(out.numpy())

Outputs:

[[1.]
 [1.]]
megengine.functional.tensor.ones_like(inp)[source]

Returns a ones tensor with the same shape as input tensor.

Return type

Tensor

megengine.functional.tensor.reshape(inp, target_shape)[source]

Reshapes a tensor to the given target shape; the total number of logical elements must remain unchanged.

Parameters
  • inp (Tensor) – input tensor.

  • target_shape (Iterable[int]) – target shape, it can contain an element of -1 representing unspec_axis.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F
x = tensor(np.arange(12, dtype=np.int32))
out = F.reshape(x, (3, 4))
print(out.numpy())

Outputs:

[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]]
Return type

Tensor

megengine.functional.tensor.scatter(inp, axis, index, source)[source]

Writes all values from the tensor source into input tensor at the indices specified in the index tensor.

For each value in source, its output index is specified by its index in source for dimensions other than axis, and by the corresponding value in index for the dimension equal to axis.

For a 3-D tensor, input tensor is updated as:

inp[index[i][j][k]][j][k] = source[i][j][k]  # if axis == 0
inp[i][index[i][j][k]][k] = source[i][j][k]  # if axis == 1
inp[i][j][index[i][j][k]] = source[i][j][k]  # if axis == 2

inp, index and source should have same number of dimensions.

It is also required that source.shape(d) <= inp.shape(d) and index.shape(d) == source.shape(d) for all dimensions d.

Moreover, the values of index must be between 0 and inp.shape(axis) - 1 inclusive.

Note

Note that, for performance reasons, the result is nondeterministic on GPU devices when index maps different source positions to the same destination position.

Check the following examples, the oup[0][2] is maybe from source[0][2] which value is 0.2256 or source[1][2] which value is 0.5339 if set the index[1][2] from 1 to 0.

Parameters
  • inp (Tensor) – input tensor to be scattered into.

  • axis (int) – axis along which to index.

  • index (Tensor) – indices of elements to scatter.

  • source (Tensor) – source element(s) to scatter.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
import megengine.functional as F
from megengine import tensor

inp = tensor(np.zeros(shape=(3, 5), dtype=np.float32))
source = tensor([[0.9935, 0.9465, 0.2256, 0.8926, 0.4396], [0.7723, 0.0718, 0.5939, 0.357, 0.4576]])
index = tensor([[0, 2, 0, 2, 1], [2, 0, 1, 1, 2]])
oup = F.scatter(inp, 0, index, source)
print(oup.numpy())

Outputs:

[[0.9935 0.0718 0.2256 0.     0.    ]
 [0.     0.     0.5939 0.357  0.4396]
 [0.7723 0.9465 0.     0.8926 0.4576]]
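
The update rule above can be expressed as a plain-Python reference loop over NumPy arrays (a sketch of the semantics on the same data, not MegEngine's implementation):

```python
import numpy as np

def scatter_ref(inp, axis, index, source):
    """Reference scatter: for each position in source, replace its
    `axis` coordinate by the value in index, then write into the copy."""
    out = inp.copy()
    for idx in np.ndindex(source.shape):
        dst = list(idx)
        dst[axis] = index[idx]  # destination coordinate along `axis`
        out[tuple(dst)] = source[idx]
    return out

inp = np.zeros((3, 5), dtype=np.float32)
source = np.array([[0.9935, 0.9465, 0.2256, 0.8926, 0.4396],
                   [0.7723, 0.0718, 0.5939, 0.357, 0.4576]], dtype=np.float32)
index = np.array([[0, 2, 0, 2, 1], [2, 0, 1, 1, 2]])
out = scatter_ref(inp, 0, index, source)
print(out)  # matches the documented F.scatter output above
```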
megengine.functional.tensor.split(inp, nsplits_or_sections, axis=0)[source]

Splits the input tensor into several smaller tensors. When nsplits_or_sections is an int, the last tensor may be smaller than the others.

Parameters
  • inp – input tensor.

  • nsplits_or_sections – number of sub-tensors, or a list of indices at which to split.

  • axis – the axis along which to split.

Returns

output tensor list.

Examples:

import os
import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.random.random((10, 20)), dtype=np.float32)
y = F.split(x, 3)
z = F.split(x, [6, 17], axis=1)

if os.environ.get("MEGENGINE_USE_SYMBOLIC_SHAPE"):
    print([tuple(i.shape.numpy().tolist()) for i in y])
    print([tuple(i.shape.numpy().tolist()) for i in z])
else:
    print([i.shape for i in y])
    print([i.shape for i in z])

Outputs:

[(4, 20), (3, 20), (3, 20)]
[(10, 6), (10, 11), (10, 3)]
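
The two calling conventions correspond to NumPy's np.array_split (an integer count, with the remainder going to the first chunks) and np.split at explicit indices; a NumPy sketch assuming that correspondence:

```python
import numpy as np

x = np.random.random((10, 20)).astype(np.float32)
y = np.array_split(x, 3)          # 3 parts: the remainder row goes first -> 4, 3, 3 rows
z = np.split(x, [6, 17], axis=1)  # cut at columns 6 and 17 -> widths 6, 11, 3
print([a.shape for a in y])  # [(4, 20), (3, 20), (3, 20)]
print([a.shape for a in z])  # [(10, 6), (10, 11), (10, 3)]
```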
megengine.functional.tensor.squeeze(inp, axis=None)[source]

Removes dimensions with shape 1.

Parameters
  • inp (Tensor) – input tensor.

  • axis (Union[int, Sequence[int], None]) – the axis (or axes) to be removed.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor(np.array([1, 2], dtype=np.int32).reshape(1, 1, 2, 1))
out = F.squeeze(x, 3)
print(out.numpy().shape)

Outputs:

(1, 1, 2)
megengine.functional.tensor.stack(inps, axis=0, device=None)[source]

Concatenates a sequence of tensors along a new axis. The input tensors must have the same shape.

Parameters
  • inps – input tensors.

  • axis – the axis along which the tensors are stacked.

  • device – the device on which the output will reside. Default: None

Returns

output concatenated tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x1 = tensor(np.arange(0, 3, dtype=np.float32).reshape((3)))
x2 = tensor(np.arange(6, 9, dtype=np.float32).reshape((3)))
out = F.stack([x1, x2], axis=0)
print(out.numpy())

Outputs:

[[0. 1. 2.]
 [6. 7. 8.]]
megengine.functional.tensor.transpose(inp, pattern)[source]

Swaps shapes and strides according to given pattern.

Parameters
  • inp (Tensor) – input tensor.

  • pattern (Iterable[int]) – a list of integers including 0, 1, …, ndim-1, and any number of 'x' characters in dimensions where this tensor should be broadcast. For example:

  • ('x') -> make a 0d (scalar) into a 1d vector

  • (0, 1) -> identity for 2d vectors

  • (1, 0) -> swaps the first and second dimensions

  • ('x', 0) -> make a row out of a 1d vector (N to 1xN)

  • (0, 'x') -> make a column out of a 1d vector (N to Nx1)

  • (2, 0, 1) -> AxBxC to CxAxB

  • (0, 'x', 1) -> AxB to Ax1xB

  • (1, 'x', 0) -> AxB to Bx1xA

  • (1,) -> removes dimension 0; it must be a broadcastable dimension (1xA to A)

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F
x = tensor(np.array([[1, 1], [0, 0]], dtype=np.int32))
out = F.transpose(x, (1, 0))
print(out.numpy())

Outputs:

[[1 0]
 [1 0]]
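
The 'x' entries insert broadcast axes of size 1; assuming the documented semantics, each such pattern has a NumPy analogue in which np.newaxis plays the role of 'x'. A sketch of a few patterns from the list above:

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)  # AxB with A=2, B=3
v = np.arange(4, dtype=np.float32)                # 1d vector, N=4

row = v[np.newaxis, :]       # ('x', 0): N -> 1xN
col = v[:, np.newaxis]       # (0, 'x'): N -> Nx1
mid = a[:, np.newaxis, :]    # (0, 'x', 1): AxB -> Ax1xB
swp = a.T[:, np.newaxis, :]  # (1, 'x', 0): AxB -> Bx1xA
print(row.shape, col.shape, mid.shape, swp.shape)
```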
megengine.functional.tensor.where(mask, x, y)[source]

Selects elements either from Tensor x or Tensor y, according to mask.

\[\textrm{out}_i = x_i \textrm{ if } \textrm{mask}_i \textrm{ is True else } y_i\]
Parameters
  • mask (Tensor) – a mask used for choosing x or y.

  • x (Tensor) – first choice.

  • y (Tensor) – second choice.

Return type

Tensor

Returns

output tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

mask = tensor(np.array([[True, False], [False, True]], dtype=bool))
x = tensor(np.array([[1, np.inf], [np.nan, 4]],
    dtype=np.float32))
y = tensor(np.array([[5, 6], [7, 8]], dtype=np.float32))
out = F.where(mask, x, y)
print(out.numpy())

Outputs:

[[1. 6.]
 [7. 4.]]
megengine.functional.tensor.zeros(shape, dtype='float32', device=None)[source]

Returns a zero tensor with the given shape.

megengine.functional.tensor.zeros_like(inp)[source]

Returns a zero tensor with the same shape as input tensor.

Parameters

inp (Tensor) – input tensor.

Return type

Tensor

Returns

output zero tensor.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

inp = tensor(np.arange(1, 7, dtype=np.int32).reshape(2,3))
out = F.zeros_like(inp)
print(out.numpy())

Outputs:

[[0 0 0]
 [0 0 0]]

megengine.functional.types

megengine.functional.types.get_ndtuple(value, *, n, allow_zero=True)[source]

Converts possibly 1D tuple to n-dim tuple.

Parameters
  • value – the value to fill into the generated tuple.

  • n – number of elements the generated tuple will have.

  • allow_zero (bool) – whether to allow zero as a tuple value.

Returns

a tuple.
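
In spirit, the helper broadcasts a scalar to an n-tuple and validates an iterable of length n; a hypothetical pure-Python sketch (the function name and validation details are assumptions, not MegEngine's code):

```python
from collections.abc import Iterable

def ndtuple_sketch(value, *, n, allow_zero=True):
    """Hypothetical sketch: broadcast a scalar to an n-tuple, validate iterables."""
    if not isinstance(value, Iterable):
        value = (value,) * n  # scalar -> repeated n times
    value = tuple(int(v) for v in value)
    if len(value) != n:
        raise ValueError(f"expected {n} elements, got {len(value)}")
    minimum = 0 if allow_zero else 1
    if any(v < minimum for v in value):
        raise ValueError("tuple value out of allowed range")
    return value

print(ndtuple_sketch(3, n=2))       # (3, 3)
print(ndtuple_sketch((1, 2), n=2))  # (1, 2)
```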

megengine.functional.utils

megengine.functional.utils.copy(inp, device=None)[source]

Copies tensor to another device.

Parameters
  • inp – input tensor.

  • device – destination device.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

x = tensor([1, 2, 3], np.int32)
y = F.copy(x, "xpu1")
print(y.numpy())

Outputs:

[1 2 3]
megengine.functional.utils.topk_accuracy(logits, target, topk=1)[source]

Calculates the classification accuracy given predicted logits and ground-truth labels.

Parameters
  • logits (Tensor) – model predictions of shape [batch_size, num_classes], representing the probability (likelihood) of each class.

  • target (Tensor) – ground-truth labels, 1d tensor of int32.

  • topk (Union[int, Iterable[int]]) – specifies the topk values, could be an int or tuple of ints. Default: 1

Return type

Union[Tensor, Iterable[Tensor]]

Returns

tensor(s) of classification accuracy between 0.0 and 1.0.

Examples:

import numpy as np
from megengine import tensor
import megengine.functional as F

logits = tensor(np.arange(80, dtype=np.int32).reshape(8,10))
target = tensor(np.arange(8, dtype=np.int32))
top1, top5 = F.topk_accuracy(logits, target, (1, 5))
print(top1.numpy(), top5.numpy())

Outputs:

0.0 0.375
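
Top-k accuracy counts a sample as correct when its true label appears among the k largest logits; a NumPy sketch of the computation on the same data (an illustration, not MegEngine's implementation):

```python
import numpy as np

def topk_accuracy_ref(logits, target, k):
    # indices of the k largest logits per row
    topk_idx = np.argsort(logits, axis=1)[:, -k:]
    # a row is a hit if the true label appears among those indices
    hits = (topk_idx == target[:, None]).any(axis=1)
    return hits.mean()

logits = np.arange(80, dtype=np.int32).reshape(8, 10)
target = np.arange(8, dtype=np.int32)
# each row is increasing, so the top-5 indices are always 5..9;
# only targets 5, 6, 7 land in that set -> 3/8 = 0.375
print(topk_accuracy_ref(logits, target, 1), topk_accuracy_ref(logits, target, 5))
```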