---
language:
- en
- fr
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.fr-en
model-index:
- name: quickmt-fr-en
  results:
  - task:
      name: Translation fra-eng
      type: translation
      args: fra-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: fra_Latn eng_Latn devtest
    metrics:
    - name: CHRF
      type: chrf
      value: 66.77
    - name: BLEU
      type: bleu
      value: 42.17
    - name: COMET
      type: comet
      value: 58.10
---


# `quickmt-fr-en` Neural Machine Translation Model 

`quickmt-fr-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from French (`fr`) into English (`en`).


## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 185M parameter transformer 'big' with 8 encoder layers and 2 decoder layers
* 50k joint SentencePiece vocabulary
* Exported for fast inference to [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.fr-en/tree/main

See the `eole` model configuration in this repository for further details. 


## Usage with `quickmt`

If you want to run GPU inference, install the NVIDIA CUDA toolkit first.

Next, install the `quickmt` python library and download the model:

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/

quickmt-model-download quickmt/quickmt-fr-en ./quickmt-fr-en
```

Finally, use the model in Python:

```python
from quickmt import Translator

# Auto-detects GPU, set to "cpu" to force CPU inference
t = Translator("./quickmt-fr-en/", device="auto")

# Translate - beam_size=1 is fastest; increase to e.g. 5 for higher quality (but slower speed)
sample_text = "Résigny est une commune française située dans le département de l'Aisne, en région Hauts-de-France. "
t(sample_text, beam_size=1)

# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```

The model is in `ctranslate2` format and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly rather than through `quickmt`. It should also be possible to use this model with e.g. [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`.
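As a rough sketch of direct `ctranslate2` usage (the SentencePiece filenames below are assumptions; check the downloaded model directory for the actual names):

```python
import ctranslate2
import sentencepiece as spm

model_dir = "./quickmt-fr-en"

# Assumed tokenizer filenames - verify against the model directory
translator = ctranslate2.Translator(model_dir, device="auto")
sp_src = spm.SentencePieceProcessor(model_file=f"{model_dir}/src.spm.model")
sp_tgt = spm.SentencePieceProcessor(model_file=f"{model_dir}/tgt.spm.model")

src = "Résigny est une commune française située dans le département de l'Aisne."

# Tokenize into subword pieces, translate, then detokenize
tokens = sp_src.encode(src, out_type=str)
results = translator.translate_batch([tokens], beam_size=5)
print(sp_tgt.decode(results[0].hypotheses[0]))
```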


## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("fra_Latn"->"eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) with `ctranslate2` on an RTX 4070s GPU with batch size 32 (a larger batch size would be faster).
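A minimal sketch of this scoring setup (the sentence lists below are placeholders, not the flores data):

```python
from sacrebleu.metrics import BLEU, CHRF

# Placeholder data; the real evaluation uses the 1012 flores-devtest pairs
sources = ["Le chat dort sur le canapé."]
hypotheses = ["The cat is sleeping on the sofa."]
references = ["The cat sleeps on the sofa."]

# chrF2 is sacrebleu's default CHRF configuration (beta=2)
print(BLEU().corpus_score(hypotheses, [references]))
print(CHRF().corpus_score(hypotheses, [references]))

# COMET scoring with the default wmt22-comet-da model,
# assuming the `unbabel-comet` package is installed
from comet import download_model, load_from_checkpoint

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r}
        for s, h, r in zip(sources, hypotheses, references)]
print(comet_model.predict(data, batch_size=8).system_score)
```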

| Model                            | chrf2 | bleu    | comet22 | Time (s) |
| -------------------------------- | ----- | ------- | ------- | -------- |
| quickmt/quickmt-fr-en            | 68.22 | 44.28   | 88.86   |  1.1     |
| Helsinki-NLP/opus-mt-fr-en       | 66.85 | 41.71   | 88.31   |  3.6     |
| facebook/m2m100_418M             | 64.39 | 36.49   | 85.87   | 18.0     |
| facebook/m2m100_1.2B             | 66.51 | 41.69   | 88.00   | 34.6     |
| facebook/nllb-200-distilled-600M | 67.82 | 44.04   | 88.47   | 21.7     |
| facebook/nllb-200-distilled-1.3B | 69.30 | 46.22   | 89.24   | 37.1     |

`quickmt-fr-en` is the fastest model evaluated and scores higher than `opus-mt-fr-en`, `m2m100_418M`, `m2m100_1.2B` and `nllb-200-distilled-600M` on all three metrics.