How to perform scientific calculations with Tensor#
APIs that can be called in the form ``megengine.functional.xxx`` are general-purpose Tensor operations and provide the common scientific computing interfaces. The design of these APIs stays as close as possible to the NumPy API. All of them can be found in general-tensor-operations.
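For example, a minimal sketch (assuming NumPy is installed and imported as ``np``; the output formatting is approximate) showing that :py:func:`~.sqrt` mirrors its NumPy counterpart:
>>> import numpy as np
>>> np.sqrt(np.array([1., 4., 9.]))          # NumPy version
array([1., 2., 3.])
>>> megengine.functional.sqrt(megengine.Tensor([1., 4., 9.]))  # MegEngine version
Tensor([1. 2. 3.], device=xpux:0)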
According to their requirements on the shape of a Tensor and their effect on it, these operations can be divided into the following categories:
See also
Not all computing interfaces in NumPy have corresponding MegEngine implementations; when processing data, you can choose to call the NumPy implementation and then convert the NumPy ndarray to a MegEngine Tensor (see the sketch below);
If you are unsure how to use an API, you can refer to the documentation of the corresponding API in NumPy.
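A minimal sketch of such a round trip (assuming NumPy is imported as ``np``; the exact output formatting may differ):
>>> import numpy as np
>>> x = np.arange(6, dtype="float32").reshape(2, 3)  # computed with NumPy
>>> t = megengine.Tensor(x)                          # ndarray -> Tensor
>>> t.numpy()                                        # Tensor -> ndarray
array([[0., 1., 2.],
       [3., 4., 5.]], dtype=float32)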
Element-wise operations#
Element-wise operations are the most common category of Tensor operations. Depending on the operands, an element-wise operation either applies the same operation to the element at each position of a single Tensor (a unary operation), or operates one by one on the corresponding elements of two or more Tensors (a binary or multi-operand operation). These operations can be roughly divided into:
Arithmetic operations (addition, subtraction, multiplication, division, etc., refer to arithmetic-operations)
Trigonometric functions and inverse trigonometric functions (refer to trigonometric-functions and hyperbolic-functions)
Bit operations (refer to bit-operations)
Logical operations (refer to logic-functions)
Neural networks also involve many element-wise operations, such as the activation function :py:func:`~.relu`.
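For example, a minimal sketch showing that :py:func:`~.relu` is applied to each element independently (output formatting approximate):
>>> x = megengine.Tensor([-1., 0., 2.])
>>> megengine.functional.relu(x)   # negative elements become 0, others are unchanged
Tensor([0. 0. 2.], device=xpux:0)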
The meaning of element-wise#
If two elements occupy the same position in their respective Tensors, we say that the two elements correspond to each other; the position is determined by the index of each element. We use the following two Tensors ``a`` and ``b`` as an example:
>>> a = megengine.Tensor([[1., 2.], [3., 4.]])
>>> b = megengine.Tensor([[9., 8.], [7., 6.]])
We use the same index ``[0][0]`` to get the elements:
>>> a[0][0]
Tensor(1.0, device=xpux:0)
>>> b[0][0]
Tensor(9.0, device=xpux:0)
It can be seen that the element with value 1 in ``a`` corresponds to the element with value 9 in ``b``. The elements in the other three positions also correspond to each other.
Note
Correspondence is defined by having the same index, which means the Tensors must have the same shape to perform element-wise operations.
Taking addition as an example, we can regard it as an addition between two matrices:
>>> a + b
Tensor([[10. 10.]
 [10. 10.]], device=xpux:0)
Warning
Element-wise calculation can also be performed between two Tensors whose shapes are not exactly the same: if the two shapes are "compatible", the Tensors may be broadcast (refer to tensor-broadcasting) to the same shape before the calculation, as shown in the sketch below. This mechanism makes Tensor computation very flexible.
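A minimal sketch of broadcasting, reusing ``a`` defined above together with a 1-dimensional Tensor (the output formatting is approximate):
>>> c = megengine.Tensor([10., 20.])
>>> a + c   # c is broadcast to shape (2, 2) before the addition
Tensor([[11. 22.]
 [13. 24.]], device=xpux:0)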
See also
People also use terms such as component-wise or point-wise to refer to element-wise operations.
Comparison with matrix operations#
Similar to ``+``, the ``*`` operator computes the multiplication of corresponding elements of two matrices, also known as the Hadamard product:
>>> a = megengine.Tensor([[1., 2.], [3., 4.]])
>>> b = megengine.Tensor([[9., 8.], [7., 6.]])
>>> a * b
Tensor([[ 9. 16.]
[21. 24.]], device=xpux:0)
Warning
**Different frameworks and libraries may define some operators differently.** In MATLAB, ``.*`` and ``.^`` denote element-wise multiplication and power, while ``*`` and ``^`` denote matrix multiplication and power; refer to the official explanation: Array vs. Matrix Operations.
Some people mistake ``*`` for matrix multiplication (:py:func:`~.matmul`); in fact, the operator corresponding to matrix multiplication in MegEngine is ``@``:
>>> a @ b
Tensor([[23. 20.]
[55. 48.]], device=xpux:0)
It corresponds to the :py:func:`~.matmul` interface provided in the :py:mod:`functional` module:
>>> megengine.functional.matmul(a, b)
Tensor([[23. 20.]
[55. 48.]], device=xpux:0)
See also
For more operations related to linear algebra, please refer to linear-algebra-functions.
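For instance, a minimal sketch computing the dot product of two vectors with :py:func:`~.dot` (assuming 1-dimensional inputs; output formatting approximate):
>>> u = megengine.Tensor([1., 2., 3.])
>>> v = megengine.Tensor([4., 5., 6.])
>>> megengine.functional.dot(u, v)   # 1*4 + 2*5 + 3*6
Tensor(32.0, device=xpux:0)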
Reduction#
Note
Reduction operations can reduce the number of elements in a Tensor.
We can understand it as dimensionality reduction in a statistical sense.
One of the simplest examples is to sum all the elements in a Tensor, using the :py:func:`~.sum` interface:
>>> a = megengine.Tensor([[1, 2, 3], [4, 5, 6]])
>>> b = megengine.functional.sum(a)
>>> b
Tensor(21, dtype=int32, device=xpux:0)
>>> print(a.shape, b.shape)
(2, 3) ()
As you can see, after summing a Tensor of shape ``(2, 3)``, we get a 0-dimensional Tensor.
Warning
Reduction does not always reduce the input Tensor to a 0-dimensional Tensor with a single element. When the ``axis`` parameter is passed in and is not None, the reduction is performed along that axis (refer to :ref:`axis-argument`), as shown in the example below. We can also keep the number of dimensions unchanged before and after the reduction by setting the parameter ``keepdims=True``.
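A minimal sketch reusing ``a`` from above (the output formatting is approximate):
>>> megengine.functional.sum(a, axis=0)   # reduce along axis 0
Tensor([5 7 9], dtype=int32, device=xpux:0)
>>> megengine.functional.sum(a, axis=1, keepdims=True)   # keep the number of dimensions
Tensor([[ 6]
 [15]], dtype=int32, device=xpux:0)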
See also
Common Tensor reduction operations include :py:func:`~.prod`, :py:func:`~.mean`, etc.; the related APIs and routines can be found in general-tensor-operations, and a brief sketch follows below. To learn more about reduction, you can refer to `Reduction operator <https://en.wikipedia.org/wiki/Reduction_operator>`_ on Wikipedia.
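A minimal sketch of :py:func:`~.prod` and :py:func:`~.mean` on a small floating-point Tensor (output formatting approximate):
>>> x = megengine.Tensor([[1., 2.], [3., 4.]])
>>> megengine.functional.prod(x)   # product of all elements
Tensor(24.0, device=xpux:0)
>>> megengine.functional.mean(x)   # mean of all elements
Tensor(2.5, device=xpux:0)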