---
language: en
tags:
- translation

license: apache-2.0
---

### eng-alv

* source group: English 
* target group: Atlantic-Congo languages 
* OPUS readme: [eng-alv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md)

* model: transformer
* source language(s): eng
* target language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<`, where `id` is a valid target-language ID (see the usage sketch after this list)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.eval.txt)

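The snippet below is a minimal usage sketch, not part of the original OPUS-MT card: it loads the model through the `transformers` MarianMT classes and prepends the required `>>id<<` token. The hub id `Helsinki-NLP/opus-mt-en-alv` is an assumption derived from the `short_pair` field in the System Info section; verify the exact name on the hub before use.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed hub id (short_pair: en-alv); adjust if the published name differs.
model_name = "Helsinki-NLP/opus-mt-en-alv"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial >>id<< token selects the target language
# (here ">>swh<<" for Swahili; any target-language ID listed above works).
src_text = [">>swh<< Good morning, how are you?"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```
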
## Benchmarks

| testset                       | BLEU | chr-F |
|-------------------------------|------|-------|
| Tatoeba-test.eng-ewe.eng.ewe  | 4.9  | 0.212 |
| Tatoeba-test.eng-ful.eng.ful  | 0.6  | 0.079 |
| Tatoeba-test.eng-ibo.eng.ibo  | 3.5  | 0.255 |
| Tatoeba-test.eng-kin.eng.kin  | 10.5 | 0.510 |
| Tatoeba-test.eng-lin.eng.lin  | 1.1  | 0.273 |
| Tatoeba-test.eng-lug.eng.lug  | 5.3  | 0.340 |
| Tatoeba-test.eng.multi        | 11.4 | 0.429 |
| Tatoeba-test.eng-nya.eng.nya  | 18.1 | 0.595 |
| Tatoeba-test.eng-run.eng.run  | 13.9 | 0.484 |
| Tatoeba-test.eng-sag.eng.sag  | 5.3  | 0.194 |
| Tatoeba-test.eng-sna.eng.sna  | 26.2 | 0.623 |
| Tatoeba-test.eng-swa.eng.swa  | 1.0  | 0.141 |
| Tatoeba-test.eng-toi.eng.toi  | 7.0  | 0.224 |
| Tatoeba-test.eng-tso.eng.tso  | 46.7 | 0.643 |
| Tatoeba-test.eng-umb.eng.umb  | 7.8  | 0.359 |
| Tatoeba-test.eng-wol.eng.wol  | 6.8  | 0.191 |
| Tatoeba-test.eng-xho.eng.xho  | 27.1 | 0.629 |
| Tatoeba-test.eng-yor.eng.yor  | 17.4 | 0.356 |
| Tatoeba-test.eng-zul.eng.zul  | 34.1 | 0.729 |

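As a rough guide only, the sketch below shows how scores like those in the table could be recomputed with `sacrebleu` from the released test-set translations linked above. The file names `hyp.txt` and `ref.txt` are placeholders: they assume the system outputs and references have first been extracted into plain, line-aligned text files.

```python
import sacrebleu

# Placeholder file names: one sentence per line, hypotheses aligned with references.
with open("hyp.txt", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]
with open("ref.txt", encoding="utf-8") as f:
    refs = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # corpus-level BLEU
chrf = sacrebleu.corpus_chrf(hyps, [refs])   # corpus-level chrF
print(f"BLEU = {bleu.score:.1f}")
print(f"chrF = {chrf.score:.3f}")  # note: recent sacrebleu reports chrF on a 0-100 scale
```
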

### System Info:
- hf_name: eng-alv
- source_languages: eng
- target_languages: alv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-alv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-alv/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: alv
- short_pair: en-alv
- chrF2_score: 0.429
- bleu: 11.4
- brevity_penalty: 1.0
- ref_len: 10603.0
- src_name: English
- tgt_name: Atlantic-Congo languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: alv
- prefer_old: False
- long_pair: eng-alv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 46e9f53347bbe9e989f0335f98465f30886d8173
- port_machine: brutasse
- port_time: 2020-08-18-01:48