Template Class TensorND

Class Documentation

template<class TensorStorage>
class mgb::TensorND

n-dimensional tensor

Note that TensorND is built on TensorStorage, which has some lazy behavior.

Public Types

using ChainReturnType = TensorND<TensorStorage>

Public Functions

TensorND()
TensorND(CompNode node)
TensorND(DType dtype)
TensorND(CompNode node, DType dtype)
TensorND(CompNode node, const TensorShape &shape, DType dtype = dtype::Float32{}, TensorFormat format = {})

allocate contiguous tensor

TensorND(CompNode node, const TensorLayout &layout)

allocate a contiguous tensor from the given comp node and layout; the layout must be contiguous, and its dtype and format are used
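
A minimal usage sketch (assuming HostTensorND is the usual alias for TensorND<HostTensorStorage> and that the class is available via megbrain/tensor.h; both names are assumptions not shown on this page):

    #include "megbrain/tensor.h"

    using namespace mgb;

    // allocate a contiguous 2x3 float32 tensor on a CPU comp node
    auto cn = CompNode::load("cpu0");
    HostTensorND host(cn, TensorShape{2, 3}, dtype::Float32{});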

ChainReturnType operator[](std::initializer_list<Slice> slice) const

get subtensor according to given slices

ChainReturnType sub(const SubTensorSpec &spec) const

get subtensor according to spec
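
For example, both forms below select the first row of the hypothetical 2x3 tensor `host` from the constructor sketch above (Slice and its apply() helper are assumed to come from the same library):

    // the first row of the 2x3 tensor, shape {1, 3}
    auto row = host[{Slice(0, 1)}];

    // the same subtensor through an explicit SubTensorSpec on axis 0
    auto spec = Slice(0, 1).apply(host.layout(), 0);
    auto row2 = host.sub(spec);

Both forms return a view sharing the underlying storage, so writes through the subtensor are visible in the original tensor.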

bool empty() const

whether underlying storage is empty

bool shape_valid() const

whether tensor shape is valid (i.e. ndim != 0)

const TensorShape &shape() const
const TensorLayout &layout() const
size_t shape(size_t dim) const

shape at given dimension, with boundary check

template<typename T, typename Iter>
T *ptr(Iter idx_begin, Iter idx_end)

get ptr at given index

template<typename T>
T *ptr(std::initializer_list<size_t> idx)
template<typename T>
const T *ptr(std::initializer_list<size_t> dim) const
template<typename T>
T *ptr() const

get ptr of buffer start; T must match dtype
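
For example (reusing the hypothetical contiguous float32 tensor `host` from above; TensorShape::total_nr_elems() gives the element count):

    // fill the whole buffer through the typed start pointer
    float* p = host.ptr<float>();
    for (size_t i = 0; i < host.shape().total_nr_elems(); ++i)
        p[i] = float(i);

    // pointer to element (1, 2)
    float* elem = host.ptr<float>({1, 2});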

dt_byte *raw_ptr() const
ChainReturnType &resize(const TensorShape &shape)

change the shape without retaining old data, and initialize the strides as contiguous

dtype and format are not changed
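
For example (hypothetical `host` tensor from above):

    // discard the old data and allocate a contiguous 4x5 buffer;
    // the dtype (float32) and format are kept
    host.resize({4, 5});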

ChainReturnType &reset(TensorStorage storage, const TensorLayout &layout)

totally reset the tensor to given storage and layout

ChainReturnType &comp_node(CompNode comp_node, bool allow_mem_node_change = false)

change comp node; see TensorStorage::comp_node()

CompNode comp_node() const
const TensorStorage &storage() const
ChainReturnType &storage(const TensorStorage &storage)

change the storage and invalidate all data, resulting in an empty tensor

DType dtype() const

get data type

TensorFormat format() const

get tensor format

ChainReturnType &dtype(DType dtype)

change underlying dtype

layout would be cleared (reset to ndim=0) if dtype actually changes

ChainReturnType &format(TensorFormat format)

change underlying tensor format

layout would be cleared (reset to ndim=0) if format actually changes
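
For instance (a hedged sketch; dtype::Int32 is assumed to be available like dtype::Float32):

    HostTensorND t(CompNode::load("cpu0"), TensorShape{2, 3});  // float32 by default
    t.dtype(dtype::Int32{});  // dtype actually changes, so the layout is cleared (ndim = 0)
    t.resize({2, 3});         // re-establish a contiguous int32 layout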

template<class RStorage>
ChainReturnType &copy_from(const TensorND<RStorage> &src)

copy from another tensor and initialize contiguous layout

Note:

  1. If this tensor's computing node is empty, it is copied from src.

  2. When copying from device to host and the two tensors reside on different computing nodes, the caller is responsible for performing a sync before copying; a better approach is to leave the host tensor's computing node empty so it is taken from src.

  3. For a cross-device copy, the copy is synced on the comp node of this tensor, and the caller is responsible for syncing this comp node with the comp node of src.

  4. If this tensor's dtype is valid, it is checked to match the dtype of src.

  5. The format is reset to the default and the layout is initialized to be contiguous.
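
A sketch of both copy directions (assuming DeviceTensorND is the TensorND<DeviceTensorStorage> alias and "xpu0" is a valid comp node locator):

    // upload: allocate the device tensor on a chosen comp node, then copy from host
    DeviceTensorND dev(CompNode::load("xpu0"));
    dev.copy_from(host);

    // download: leave the host tensor's comp node empty so it is taken from dev
    // (notes 1 and 2 above); sync() waits for the asynchronous copy to finish
    HostTensorND host_copy;
    host_copy.copy_from(dev).sync();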

template<class RStorage>
const ChainReturnType &copy_from_fixlayout(const TensorND<RStorage> &src) const

copy from another tensor of the same shape, retaining current layout

If the storage types of src and this differ and src is not contiguous, temporary storage is allocated to first make src contiguous.

template<class RStorage>
ChainReturnType &copy_from_fixlayout(const TensorND<RStorage> &src)

non-const version of copy_from_fixlayout
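
For example, copying into a non-contiguous view keeps that view's strides, so the data lands inside the original buffer (hedged sketch; `col_data` is a hypothetical {2, 1} host tensor):

    // the first column of the 2x3 tensor `host`, a non-contiguous {2, 1} view
    auto col = host[{Slice(), Slice(0, 1)}];
    col.copy_from_fixlayout(col_data);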

megdnn::TensorND as_megdnn() const

convert to megdnn::TensorND

const ChainReturnType &sync() const

block the host thread to synchronize with the CompNode

ChainReturnType &sync()
template<bool x = true, typename = std::enable_if_t<x && std::is_same<TensorStorage, HostTensorStorage>::value>>
DeviceTensorND proxy_to_default_cpu() const

similar to HostTensorStorage::proxy_to_default_cpu

Public Static Functions

template<class RStorage, typename = typename std::enable_if<!std::is_same<TensorStorage, RStorage>::value>::type>
ChainReturnType make_proxy(const TensorND<RStorage> &src)

similar to TensorStorage<>::make_proxy
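
For instance (a hedged sketch; whether such a proxy is valid depends on the comp node, so a CPU comp node that can alias host memory is assumed):

    // view the host tensor's buffer as a device tensor without copying
    auto dev_view = DeviceTensorND::make_proxy(host);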