megengine.functional.distributed.reduce_sum
- reduce_sum(inp, group=WORLD, device=None) [source]
Reduce tensor data across the specified group by sum. Only the root process will receive the final result.
- Parameters
  - inp (Tensor) – Input tensor.
  - group (Optional[Group]) – The process group to work on. The default group is WORLD, which means all available processes. You can pass a list of process ranks to create a new group to work on, e.g. [1, 3, 5].
  - device (Optional[str]) – The specific device on which to execute this operator. None (the default) means the device of inp will be used. Specify "gpu0:1" to execute this operator on a different CUDA stream, where 1 is the stream id; the default stream id is 0.
- Return type
- Returns
Reduced tensor in the root process, None in other processes.
Examples
input = Tensor([rank])
# Rank 0 # input: Tensor([0])
# Rank 1 # input: Tensor([1])
output = reduce_sum(input)
# Rank 0 # output: Tensor([1])
# Rank 1 # output: None

input = Tensor([rank])
group = Group([1, 0])  # first rank is root
output = reduce_sum(input, group)
# Rank 0 # output: None
# Rank 1 # output: Tensor([1])
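The snippet below is a minimal, self-contained sketch of how the example above could actually be launched, assuming a single machine with two GPUs and using megengine.distributed.launcher to spawn one process per device; the 2-GPU setup and the printed values are illustrative assumptions, not part of this reference.

import megengine.distributed as dist
from megengine import Tensor
from megengine.functional.distributed import reduce_sum

@dist.launcher(n_gpus=2)  # assumption: two GPUs are available on this machine
def worker():
    rank = dist.get_rank()
    inp = Tensor([rank])          # rank 0 holds [0], rank 1 holds [1]
    out = reduce_sum(inp)         # sum across the default WORLD group
    if out is not None:           # only the root process receives the result
        print(rank, out.numpy())  # expected on rank 0: 0 [1]

worker()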