Embedding¶
- class Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=None, initial_weight=None, freeze=False, **kwargs)[source]¶
A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings. The indices should be less than num_embeddings.
- Parameters
num_embeddings (int) – size of the embedding dictionary.
embedding_dim (int) – size of each embedding vector.
padding_idx (Optional[int]) – should be set to None, not supported now.
max_norm (Optional[float]) – should be set to None, not supported now.
norm_type (Optional[float]) – should be set to None, not supported now.
initial_weight (Optional[Parameter]) – the learnable weights of the module of shape (num_embeddings, embedding_dim).
Examples
>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> weight = mge.tensor(np.array([(1.2,2.3,3.4,4.5,5.6)], dtype=np.float32))
>>> data = mge.tensor(np.array([(0,0)], dtype=np.int32))
>>> embedding = M.Embedding(1, 5, initial_weight=weight)
>>> output = embedding(data)
>>> with np.printoptions(precision=6):
...     print(output.numpy())
[[[1.2 2.3 3.4 4.5 5.6]
  [1.2 2.3 3.4 4.5 5.6]]]
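The lookup simply selects one row of the weight matrix per index, so an input of shape (batch, seq_len) produces an output of shape (batch, seq_len, embedding_dim). Below is a minimal sketch of that shape mapping with the weight initialized internally (initial_weight left as None); it assumes the same mge/M import aliases as the example above, and since the weight values are not specified here, only the output shape is checked.

>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> embedding = M.Embedding(10, 4)  # weight of shape (10, 4), initialized internally
>>> indices = mge.tensor(np.array([[1, 2, 4, 5], [4, 3, 2, 9]], dtype=np.int32))
>>> output = embedding(indices)     # one embedding row is looked up per index
>>> output.numpy().shape
(2, 4, 4)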
- classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=None)[source]¶
Creates an Embedding instance from a given 2-dimensional FloatTensor.
- Parameters
embeddings (Parameter) – tensor containing the weights for the embedding.
freeze (Optional[bool]) – if True, the weight does not get updated during the learning process. Default: True.
padding_idx (Optional[int]) – should be set to None, not supported now.
max_norm (Optional[float]) – should be set to None, not supported now.
norm_type (Optional[float]) – should be set to None, not supported now.
Examples
>>> import numpy as np
>>> import megengine as mge
>>> import megengine.module as M
>>> weight = mge.tensor(np.array([(1.2,2.3,3.4,4.5,5.6)], dtype=np.float32))
>>> data = mge.tensor(np.array([(0,0)], dtype=np.int32))
>>> embedding = M.Embedding.from_pretrained(weight, freeze=False)
>>> output = embedding(data)
>>> output.numpy()
array([[[1.2, 2.3, 3.4, 4.5, 5.6],
        [1.2, 2.3, 3.4, 4.5, 5.6]]], dtype=float32)
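As a hedged illustration (not part of the original reference), from_pretrained with a weight tensor should behave like constructing Embedding directly with initial_weight set to the same tensor, so the two lookups agree on the same indices. This sketch continues the session above and uses only the names already defined there.

>>> num_embeddings, embedding_dim = weight.numpy().shape
>>> manual = M.Embedding(num_embeddings, embedding_dim, initial_weight=weight)
>>> bool(np.allclose(manual(data).numpy(), output.numpy()))
True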