
# SELECTRA: A Spanish ELECTRA

SELECTRA is a Spanish pre-trained language model based on ELECTRA. We release a small and a medium version with the following configurations:

| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
|---|---|---|---|---|---|---|
| SELECTRA small | 12 | 256 | 22M | 50k | 512 | True |
| SELECTRA medium | 12 | 384 | 41M | 50k | 512 | True |

SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results (see the Metrics section below).
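The parameter counts can be verified directly from a checkpoint; a quick sketch using the `Recognai/selectra_small` checkpoint referenced below:

```python
from transformers import AutoModel

# Load the small discriminator and count its parameters (~22M).
model = AutoModel.from_pretrained("Recognai/selectra_small")
print(f"{sum(p.numel() for p in model.parameters()):,}")
```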

## Usage

From the original ELECTRA model card: "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN." The discriminator should therefore output a high logit for the fake input token, as the following example demonstrates:

```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast

discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")

sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."

inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]

print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando  pan     rosa    con     tomate  y       aceite  de      oliva   .
-3.1    -3.6    -6.9    -3.0    0.19    -4.5    -3.3    -5.1    -5.7    -7.7    -4.4    -4.2
"""
```

However, you will probably want to fine-tune this model on a downstream task. We provide models fine-tuned on the XNLI dataset, which can be used together with the zero-shot classification pipeline:
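A minimal sketch of that pipeline; the checkpoint name `Recognai/zeroshot_selectra_small` is an assumption about the fine-tuned model's name, and the Spanish `hypothesis_template` replaces the pipeline's English default:

```python
from transformers import pipeline

# Checkpoint name is assumed; substitute the actual XNLI fine-tuned model.
classifier = pipeline(
    "zero-shot-classification",
    model="Recognai/zeroshot_selectra_small",
)

prediction = classifier(
    "El equipo ganó el partido en el último minuto.",
    candidate_labels=["deportes", "política", "economía", "cultura"],
    hypothesis_template="Este ejemplo es {}.",
)
print(prediction["labels"][0])  # most likely label
```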

## Metrics

We fine-tune our models on 3 different down-stream tasks:

- CoNLL2002 - NER
- PAWS-X
- XNLI

For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below. To compare our results with those of other Spanish language models, we provide the same metrics taken from the evaluation table of the Spanish Language Model repo.

| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
|---|---|---|---|---|
| SELECTRA small | 0.865 ± 0.004 | 0.896 ± 0.002 | 0.784 ± 0.002 | 22M |
| SELECTRA medium | 0.873 ± 0.003 | 0.896 ± 0.002 | 0.804 ± 0.002 | 41M |
| mBERT | 0.8691 | 0.8955 | 0.7876 | 178M |
| BETO | 0.8759 | 0.9000 | 0.8130 | 110M |
| RoBERTa-b | 0.8851 | 0.9000 | 0.8016 | 125M |
| RoBERTa-l | 0.8772 | 0.9060 | 0.7958 | 355M |
| Bertin | 0.8835 | 0.8990 | 0.7890 | 125M |
| ELECTRICIDAD | 0.7954 | 0.9025 | 0.7878 | 109M |

Some details of our fine-tuning runs (the layerwise learning-rate decay is sketched after this list):

- epochs: 5
- batch size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
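Layerwise learning-rate decay trains earlier layers with smaller learning rates than later ones. A minimal sketch of how such parameter groups can be built for an ELECTRA encoder; the decay factor of 0.9 is an assumption, not the value from our runs:

```python
from torch.optim import AdamW
from transformers import ElectraForSequenceClassification

model = ElectraForSequenceClassification.from_pretrained("Recognai/selectra_small")

base_lr, decay = 1e-4, 0.9  # decay factor is an assumed value
layers = model.electra.encoder.layer

# Each encoder layer gets base_lr scaled down by its distance from the top.
param_groups = [
    {"params": layer.parameters(), "lr": base_lr * decay ** (len(layers) - 1 - i)}
    for i, layer in enumerate(layers)
]
# Embeddings sit below all encoder layers; the task head trains at the full rate.
param_groups.append(
    {"params": model.electra.embeddings.parameters(), "lr": base_lr * decay ** len(layers)}
)
param_groups.append({"params": model.classifier.parameters(), "lr": base_lr})

optimizer = AdamW(param_groups)
```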

For all the details, check out our selectra repo.

## Training

We pre-trained our SELECTRA models on the Spanish portion of the Oscar dataset, which is about 150GB in size. Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps (a sketch of the schedule follows the list). Some details of the training:

- steps: 300k
- batch size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
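A minimal sketch of our reading of this schedule: linear warmup followed by linear decay, with the whole cycle restarted at step 150k. Whether the warmup is repeated after the restart is an assumption; the selectra repo has the authoritative implementation:

```python
def learning_rate(step, base_lr=5e-4, warmup=10_000, cycle=150_000):
    """Linear warmup + linear decay to 0, restarted every `cycle` steps."""
    s = step % cycle
    if s < warmup:
        return base_lr * s / warmup  # linear warmup to base_lr
    return base_lr * (1 - (s - warmup) / (cycle - warmup))  # linear decay to 0
```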

For all details, check out our selectra repo.

Note: Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing accents were not optimized. If you fine-tune this model on a down-stream task, you might consider using a tokenizer that does not strip accents:

```python
from transformers import ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained(
    "Recognai/selectra_small", strip_accents=False
)
```

## Motivation

Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of comparisons between those and their bigger siblings.

## Acknowledgment

This research was supported by the Google TPU Research Cloud (TRC) program.

## Authors
