Linear Layers

MLP

class supar.modules.mlp.MLP(n_in: int, n_out: int, dropout: float = 0.0, activation: bool = True)[source]

Applies a linear transformation together with a non-linear activation to the incoming tensor: \(y = \mathrm{Activation}(x A^T + b)\)

Parameters:
  • n_in (int) – The size of each input feature.

  • n_out (int) – The size of each output feature.

  • dropout (float) – If non-zero, introduces a SharedDropout layer on the output with this dropout ratio. Default: 0.

  • activation (bool) – Whether to apply the non-linear activation. Default: True.
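
The following is a minimal sketch of the layer described above, assembled from standard PyTorch modules rather than taken from the library's source: a plain nn.Dropout stands in for SharedDropout, and LeakyReLU is assumed as the activation:

    import torch
    import torch.nn as nn

    class SimpleMLP(nn.Module):
        """Sketch of the described layer: linear transform,
        optional activation, and dropout on the output."""

        def __init__(self, n_in: int, n_out: int,
                     dropout: float = 0.0, activation: bool = True):
            super().__init__()
            self.linear = nn.Linear(n_in, n_out)  # y = x A^T + b
            # LeakyReLU is an assumption; nn.Dropout stands in for SharedDropout.
            self.activation = nn.LeakyReLU(0.1) if activation else nn.Identity()
            self.dropout = nn.Dropout(dropout)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.dropout(self.activation(self.linear(x)))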

forward(x: Tensor) → Tensor[source]
Parameters:

x (Tensor) – The input tensor, where the size of each input feature is n_in.

Returns:

A tensor with the size of each output feature being n_out.
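
A minimal usage example, assuming the supar package is installed; only the documented constructor arguments are used:

    import torch
    from supar.modules.mlp import MLP

    mlp = MLP(n_in=100, n_out=200, dropout=0.33)
    x = torch.randn(8, 20, 100)  # batch of 8 sequences, 20 tokens, 100 features
    y = mlp(x)                   # last dimension mapped from n_in to n_out
    print(y.shape)               # torch.Size([8, 20, 200])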