megenginelite.tensor

class LiteTensor(layout=None, device_type=LiteDeviceType.LITE_CPU, device_id=0, is_pinned_host=False, shapes=None, dtype=None, physic_construct=True)[source]

Description of a block of data with necessary information.

Parameters
  • layout – layout of Tensor

  • device_type – device type of Tensor

  • device_id – device id of Tensor

  • is_pinned_host – when set, the storage memory of the tensor is pinned memory, which optimizes H2D or D2H memory copies. If the device or layout is not set, this tensor will automatically be set to a pinned tensor when copying from another device (CUDA) tensor

  • shapes – the shape of data

  • dtype – data type

Note

The number of dims of the shape should be less than 8. The supported data types are defined in LiteDataType.

copy_from(src_tensor)[source]

copy memory from the src_tensor

Parameters

src_tensor – source tensor

property device_id

get device id of the tensor

property device_type

get device type of the tensor

fill_zero()[source]

fill the buffer memory with zero

get_ctypes_memory()[source]

get the memory of the tensor, returning a ctypes c_void_p pointing to the tensor memory

get_data_by_share()[source]

get the data in the tensor, share the data with a new numpy array, and return the numpy array

Note

Be careful: the data in the numpy array is only valid until the tensor memory is written again, e.g. when LiteNetwork forwards next time.

property is_continue

whether the tensor memory is contiguous

property is_pinned_host

whether the tensor is a pinned tensor

property layout
property nbytes

get the length of the memory in bytes

reshape(shape)[source]

reshape the tensor without changing its data.

Parameters

shape – target shape

set_data_by_copy(data, data_length=0, layout=None)[source]

copy the data to the tensor

Parameters
  • data – the data to copy to the tensor; it should be a list, a numpy.ndarray, or ctypes data with length

  • data_length – length of data in bytes

  • layout – layout of data

set_data_by_share(data, length=0, layout=None)[source]

share the data to the tensor

Parameters

data – the data to be shared with the tensor; it should be a numpy.ndarray or ctypes data

share_memory_with(src_tensor)[source]

share the same memory with the src_tensor; the tensor's own memory will be freed

Parameters

src_tensor – the source tensor that will share memory with this tensor

slice(start, end, step=None)[source]

slice the tensor with the given start, end, and step

Parameters
  • start – slice begin index of each dim

  • end – slice end index of each dim

  • step – slice step of each dim

to_numpy()[source]

get the buffer of the tensor as a numpy array

update()[source]

update the members from C; this is automatically used after slice and share operations

class LiteLayout(shape=None, dtype=None)[source]

Description of a layout used in Lite. A Lite layout is fully defined by its shape and data type.

Parameters
  • shape – the shape of data.

  • dtype – data type.

Note

The number of dims of the shape should be less than 8. The supported data types are defined in LiteDataType.

Examples

import numpy as np
layout = LiteLayout([1, 4, 8, 8], LiteDataType.LITE_FLOAT)
assert layout.shapes == [1, 4, 8, 8]
assert layout.dtype == LiteDataType.LITE_FLOAT
data_type

Structure/Union member

property dtype
ndim

Structure/Union member

property shapes