PlanTL Project's Spanish-Catalan machine translation model
Table of Contents
- Model Description
- Intended Uses and Limitations
- How to Use
- Training
- Evaluation
- Additional Information
Model description
This model was trained from scratch using the Fairseq toolkit on a combination of Spanish-Catalan datasets totalling about 92 million sentences. The model is evaluated on several public datasets comprising five different domains (general, administrative, technology, biomedical, and news).
Intended uses and limitations
You can use this model for machine translation from Spanish to Catalan.
How to use
Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using Python:

```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model files (CTranslate2 model + SentencePiece model) from the Hub
model_dir = snapshot_download(repo_id="PlanTL-GOB-ES/mt-plantl-es-ca", revision="main")

# Tokenize the source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Bienvenido al Proyecto PlanTL!")

# Translate and detokenize the best hypothesis
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
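Running the snippet downloads the model files from the Hugging Face Hub, tokenizes the Spanish input with the bundled SentencePiece model, translates it with CTranslate2, and prints the detokenized Catalan output.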
Training
Training data
The model was trained on a combination of the following datasets:
| Dataset | Sentences | Tokens |
|---|---|---|
| DOCG v2 | 8,472,786 | 188,929,206 |
| El Periodico | 6,483,106 | 145,591,906 |
| EuroParl | 1,876,669 | 49,212,670 |
| WikiMatrix | 1,421,077 | 34,902,039 |
| Wikimedia | 335,955 | 8,682,025 |
| QED | 71,867 | 1,079,705 |
| TED2020 v1 | 52,177 | 836,882 |
| CCMatrix v1 | 56,103,820 | 1,064,182,320 |
| MultiCCAligned v1 | 2,433,418 | 48,294,144 |
| ParaCrawl | 15,327,808 | 334,199,408 |
| Total | 92,578,683 | 1,875,910,305 |
Training procedure
Data preparation
All datasets are concatenated, filtered with the mBERT Gencata parallel filter, and cleaned with the clean-corpus-n.perl script from Moses, keeping only sentence pairs with between 5 and 150 words.
Before training, punctuation is normalized using a modified version of the join-single-file.py script from SoftCatalà.
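For illustration only, a minimal sketch of the 5-150 word length constraint (the actual pipeline uses Moses' clean-corpus-n.perl together with the mBERT Gencata filter; the file names below are placeholders):

```python
# Illustrative only: keep parallel sentence pairs whose source and target
# sides both have between 5 and 150 words (file names are placeholders).
def keep_pair(src: str, tgt: str, min_len: int = 5, max_len: int = 150) -> bool:
    return all(min_len <= len(side.split()) <= max_len for side in (src, tgt))

with open("corpus.es") as f_es, open("corpus.ca") as f_ca, \
     open("corpus.clean.es", "w") as out_es, open("corpus.clean.ca", "w") as out_ca:
    for es, ca in zip(f_es, f_ca):
        if keep_pair(es, ca):
            out_es.write(es)
            out_ca.write(ca)
```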
Tokenization
All data is tokenized with SentencePiece, using a 50k-token SentencePiece model learned from the combination of all filtered training data. This SentencePiece model is included with the released model files.
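As an illustrative sketch (the input file name and options below are assumptions, not the authors' exact command), a 50k SentencePiece model can be trained and applied with the sentencepiece Python package:

```python
import sentencepiece as spm

# Train a 50k-vocabulary SentencePiece model on the filtered training data
# (input file name is a placeholder).
spm.SentencePieceTrainer.train(
    input="train.es-ca.txt",
    model_prefix="spm",
    vocab_size=50000,
)

# Apply the learned model to a sentence
sp = spm.SentencePieceProcessor(model_file="spm.model")
print(sp.encode("Bienvenido al Proyecto PlanTL!", out_type=str))
```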
Hyperparameters
The model is based on the Transformer-XLarge architecture proposed by Subramanian et al. The following hyperparameters were set in the Fairseq toolkit:
| Hyperparameter | Value |
|---|---|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 96,000 |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 1e-3 |
| LR scheduler | inverse sqrt |
| Warmup updates | 4000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
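These settings roughly correspond to a fairseq-train invocation like the sketch below. This is illustrative only, not the authors' actual command: the data path, --max-tokens, and --update-freq values are assumptions, chosen only to suggest how a ~96,000-token effective batch could be reached (assuming the effective batch size is counted in tokens).

```python
import subprocess

# Hedged sketch of launching fairseq-train with the hyperparameters above.
subprocess.run([
    "fairseq-train", "data-bin/es-ca",                      # placeholder data path
    "--arch", "transformer_vaswani_wmt_en_de_big",
    "--encoder-layers", "24", "--decoder-layers", "6",
    "--encoder-normalize-before", "--decoder-normalize-before",
    "--share-all-embeddings", "--share-decoder-input-output-embed",
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.98)", "--clip-norm", "0.0",
    "--lr", "1e-3", "--lr-scheduler", "inverse_sqrt", "--warmup-updates", "4000",
    "--dropout", "0.1",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.1",
    "--max-tokens", "4000", "--update-freq", "24",          # assumed: ~96,000 tokens per update on one GPU
], check=True)
```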
The model was trained on shards of 10 million sentences for a total of 8,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 6 checkpoints.
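Fairseq ships scripts/average_checkpoints.py for checkpoint averaging; the following is only a minimal illustrative equivalent, with placeholder checkpoint paths:

```python
import torch

# Average the model weights of the last 6 checkpoints (paths are placeholders).
paths = [f"checkpoints/checkpoint_{step}.pt" for step in range(3000, 8001, 1000)]
avg_state = None
for path in paths:
    state = torch.load(path, map_location="cpu")["model"]
    if avg_state is None:
        avg_state = {k: v.float().clone() for k, v in state.items()}
    else:
        for k, v in state.items():
            avg_state[k] += v.float()
avg_state = {k: v / len(paths) for k, v in avg_state.items()}
```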
Evaluation
Variables and metrics
We use the BLEU score for evaluation on the following test sets: Flores-101, TaCon, United Nations, Cybersecurity, the WMT19 biomedical test set, and the WMT13 news test set.
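The card does not specify the scoring tool; as one common option, corpus-level BLEU can be computed with the sacreBLEU Python package from plain-text hypothesis and reference files (file names below are placeholders):

```python
import sacrebleu

# Corpus-level BLEU between system output and the reference translation
# (file names are placeholders).
with open("hypotheses.ca") as f:
    hyps = [line.strip() for line in f]
with open("reference.ca") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU = {bleu.score:.1f}")
```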
Evaluation results
Below are the evaluation results for Spanish-to-Catalan machine translation, compared to Softcatalà and Google Translate:
| Test set | SoftCatalà | Google Translate | mt-plantl-es-ca |
|---|---|---|---|
| Spanish Constitution | 63.6 | 61.7 | 63.0 |
| United Nations | 73.8 | 74.8 | 74.9 |
| Flores 101 dev | 22.0 | 23.1 | 22.5 |
| Flores 101 devtest | 22.7 | 23.6 | 23.1 |
| Cybersecurity | 61.4 | 69.5 | 67.3 |
| WMT19 biomedical | 60.2 | 59.7 | 60.6 |
| WMT13 news | 21.3 | 22.4 | 22.0 |
| Average | 46.4 | 47.8 | 47.6 |
Additional information
Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
Contact information
For further information, send an email to plantl-gob-es@bsc.es
Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
Licensing information
This work is licensed under the Apache License, Version 2.0
Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)
Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) or the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.