Embedding Layers#

TransformerEmbedding#

class supar.modules.pretrained.TransformerEmbedding(name: str, n_layers: int, n_out: int = 0, stride: int = 256, pooling: str = 'mean', pad_index: int = 0, mix_dropout: float = 0.0, finetune: bool = False)[source]#

Bidirectional transformer embeddings of words from various transformer architectures (Devlin et al., 2019).

Parameters:
  • name (str) – Path or name of the pretrained model registered in transformers, e.g., 'bert-base-cased'.

  • n_layers (int) – The number of BERT layers to use. If 0, uses all layers.

  • n_out (int) – The requested size of the embeddings. If 0, uses the size of the pretrained embedding model. Default: 0.

  • stride (int) – A sequence longer than the maximum length will be split into several small pieces, each processed with a sliding window of size stride. Default: 256.

  • pooling (str) – The pooling method used to derive token embeddings from subtoken piece embeddings: first takes the first subtoken, last takes the last subtoken, mean takes the mean over all subtokens, and None applies no reduction. Default: mean.

  • pad_index (int) – The index of the padding token in BERT vocabulary. Default: 0.

  • mix_dropout (float) – The dropout ratio of BERT layers. This value will be passed into the ScalarMix layer. Default: 0.

  • finetune (bool) – If True, the model parameters will be updated together with the downstream task. Default: False.
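
A minimal construction sketch (the hyperparameter values here are illustrative; the model name must be resolvable by transformers):

>>> from supar.modules.pretrained import TransformerEmbedding
>>> # mix the top 4 BERT layers via ScalarMix and project the result to 100 dims
>>> embed = TransformerEmbedding('bert-base-cased',
...                              n_layers=4,
...                              n_out=100,
...                              pooling='mean',
...                              mix_dropout=0.1)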

forward(tokens: Tensor) → Tensor[source]#
Parameters:

tokens (Tensor) – Subword token indices of shape [batch_size, seq_len, fix_len].

Returns:

Contextualized token embeddings of shape [batch_size, seq_len, n_out].

Return type:

Tensor
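
A sketch of the shape contract, continuing the construction example above; the indices below are placeholders rather than real tokenizer output (in practice they come from the matching transformers tokenizer):

>>> import torch
>>> # 2 sentences, up to 5 words, at most 4 subword pieces per word;
>>> # positions equal to pad_index (0) are treated as padding
>>> tokens = torch.randint(1, 1000, (2, 5, 4))
>>> embed(tokens).shape
torch.Size([2, 5, 100])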

ELMoEmbedding#

class supar.modules.pretrained.ELMoEmbedding(name: str = 'original_5b', bos_eos: Tuple[bool, bool] = (True, True), n_out: int = 0, dropout: float = 0.5, finetune: bool = False)[source]#

Contextual word embeddings using a word-level bidirectional language model (Peters et al., 2018).

Parameters:
  • name (str) – The name of the pretrained ELMo registered in OPTION and WEIGHT. Default: 'original_5b'.

  • bos_eos (Tuple[bool, bool]) – A tuple of two boolean values indicating whether to keep the start/end boundaries of sentence outputs. Default: (True, True).

  • n_out (int) – The requested size of the embeddings. If 0, uses the default size of ELMo outputs. Default: 0.

  • dropout (float) – The dropout ratio for the ELMo layer. Default: 0.5.

  • finetune (bool) – If True, the model parameters will be updated together with the downstream task. Default: False.
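
A construction sketch, assuming allennlp is installed (the named pretrained weights are downloaded on first use):

>>> from supar.modules.pretrained import ELMoEmbedding
>>> # keep both sentence boundaries and project ELMo outputs to 100 dims
>>> elmo = ELMoEmbedding('original_5b', bos_eos=(True, True), n_out=100)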

forward(chars: LongTensor) → Tensor[source]#
Parameters:

chars (LongTensor) – Character ids of shape [batch_size, seq_len, fix_len].

Returns:

ELMo embeddings of shape [batch_size, seq_len, n_out].

Return type:

Tensor
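
A usage sketch continuing the example above; the character ids here are placeholders for what allennlp.modules.elmo.batch_to_ids would produce (ELMo reserves 50 character slots per token):

>>> import torch
>>> chars = torch.randint(1, 262, (2, 5, 50))  # placeholder character ids
>>> out = elmo(chars)  # ELMo embeddings of shape [batch_size, seq_len, n_out]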