---
language:
  - br
  - cel
  - cy
  - en
  - ga
tags:
  - translation
  - opus-mt-tc
license: cc-by-4.0
model-index:
  - name: opus-mt-tc-big-cel-en
    results:
      - task:
          name: Translation cym-eng
          type: translation
          args: cym-eng
        dataset:
          name: flores101-devtest
          type: flores_101
          args: cym eng devtest
        metrics:
          - name: BLEU
            type: bleu
            value: 50.2
      - task:
          name: Translation gle-eng
          type: translation
          args: gle-eng
        dataset:
          name: flores101-devtest
          type: flores_101
          args: gle eng devtest
        metrics:
          - name: BLEU
            type: bleu
            value: 37.4
      - task:
          name: Translation bre-eng
          type: translation
          args: bre-eng
        dataset:
          name: tatoeba-test-v2021-08-07
          type: tatoeba_mt
          args: bre-eng
        metrics:
          - name: BLEU
            type: bleu
            value: 36.1
      - task:
          name: Translation cym-eng
          type: translation
          args: cym-eng
        dataset:
          name: tatoeba-test-v2021-08-07
          type: tatoeba_mt
          args: cym-eng
        metrics:
          - name: BLEU
            type: bleu
            value: 53.6
      - task:
          name: Translation gle-eng
          type: translation
          args: gle-eng
        dataset:
          name: tatoeba-test-v2021-08-07
          type: tatoeba_mt
          args: gle-eng
        metrics:
          - name: BLEU
            type: bleu
            value: 57.7
---

opus-mt-tc-big-cel-en

Neural machine translation model for translating from Celtic languages (cel) to English (en).

This model is part of the OPUS-MT project, an effort to make neural machine translation models widely available and accessible for many languages of the world. All models are originally trained with Marian NMT, an efficient NMT implementation written in pure C++, and then converted to PyTorch using the transformers library by Hugging Face. Training data is taken from OPUS, and training pipelines follow the procedures of OPUS-MT-train.

@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg  and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}

Model info

  • Source language(s): bre cym gle (Breton, Welsh, Irish)
  • Target language(s): eng (English)

Usage

A short example:

from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "A-du emaoc’h?",
    "Ta'n ushtey glen."
]

# load the tokenizer and model from the Hugging Face Hub
model_name = "Helsinki-NLP/opus-mt-tc-big-cel-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# tokenize the source sentences and translate them
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     Is that you?
#     Ta'n ushtey glen.
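
For more control over decoding, you can pass standard transformers generation arguments to generate(). A minimal sketch (the beam size and length limit below are illustrative values, not settings taken from this model card):

from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-cel-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# tokenize a single source sentence
batch = tokenizer(["A-du emaoc’h?"], return_tensors="pt", padding=True)

# beam search with an explicit beam size and length limit (illustrative values)
translated = model.generate(**batch, num_beams=4, max_length=128)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))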

You can also use OPUS-MT models with the transformers pipelines, for example:

from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cel-en")
print(pipe("A-du emaoc’h?"))

# expected output: Is that you?
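
The pipeline returns a list of dictionaries with a translation_text field, so translating a batch of sentences looks roughly like this (a small sketch reusing the example sentences from above):

from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-cel-en")

src_text = ["A-du emaoc’h?", "Ta'n ushtey glen."]

# one result dictionary is returned per input sentence
for result in pipe(src_text):
    print(result["translation_text"])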

Benchmarks

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| bre-eng | tatoeba-test-v2021-08-07 | 0.53712 | 36.1 | 383 | 2065 |
| cym-eng | tatoeba-test-v2021-08-07 | 0.69239 | 53.6 | 818 | 5563 |
| gle-eng | tatoeba-test-v2021-08-07 | 0.72087 | 57.7 | 1913 | 11190 |
| cym-eng | flores101-devtest | 0.71379 | 50.2 | 1012 | 24721 |
| gle-eng | flores101-devtest | 0.63946 | 37.4 | 1012 | 24721 |
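
The BLEU and chr-F values above are corpus-level scores. A minimal sketch of how such metrics can be computed with the sacrebleu package, using a toy hypothesis/reference pair rather than the actual test sets:

import sacrebleu

# toy example: one system output and one reference translation
hypotheses = ["Is that you?"]
references = [["Is that you?"]]

# corpus-level BLEU and chrF, the metrics reported in the table above
print(sacrebleu.corpus_bleu(hypotheses, references).score)
print(sacrebleu.corpus_chrf(hypotheses, references).score)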

Acknowledgements

The work is supported by the European Language Grid as pilot project 2866, by the FoTran project, funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the MeMAD project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by CSC -- IT Center for Science, Finland.

Model conversion info

  • transformers version: 4.16.2
  • OPUS-MT git hash: 3405783
  • port time: Wed Apr 13 18:36:25 EEST 2022
  • port machine: LM0-400-22516.local