tiedeman committed
Commit: f689e55
Parent: 1268b03

Initial commit

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,199 @@
---
language:
- be
- es
- ru
- uk
- zle

tags:
- translation

license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-es-zle
  results:
  - task:
      name: Translation spa-rus
      type: translation
      args: spa-rus
    dataset:
      name: flores101-devtest
      type: flores_101
      args: spa rus devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 20.2
  - task:
      name: Translation spa-bel
      type: translation
      args: spa-bel
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: spa-bel
    metrics:
    - name: BLEU
      type: bleu
      value: 27.5
  - task:
      name: Translation spa-rus
      type: translation
      args: spa-rus
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: spa-rus
    metrics:
    - name: BLEU
      type: bleu
      value: 49.0
  - task:
      name: Translation spa-ukr
      type: translation
      args: spa-ukr
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: spa-ukr
    metrics:
    - name: BLEU
      type: bleu
      value: 42.3
  - task:
      name: Translation spa-rus
      type: translation
      args: spa-rus
    dataset:
      name: newstest2012
      type: wmt-2012-news
      args: spa-rus
    metrics:
    - name: BLEU
      type: bleu
      value: 24.6
  - task:
      name: Translation spa-rus
      type: translation
      args: spa-rus
    dataset:
      name: newstest2013
      type: wmt-2013-news
      args: spa-rus
    metrics:
    - name: BLEU
      type: bleu
      value: 26.9
---
# opus-mt-tc-big-es-zle

Neural machine translation model for translating from Spanish (es) to East Slavic languages (zle).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to PyTorch using the transformers library by Hugging Face. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite them if you use this model)

```
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```

## Model info

* Release: 2022-03-23
* source language(s): spa
* target language(s): bel rus ukr
* valid target language labels: >>bel<< >>rus<< >>ukr<<
* model: transformer-big
* data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807_transformer-big_2022-03-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.zip)
* more information about released models: [OPUS-MT spa-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-zle/README.md)
* more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian)

This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (where id is a valid target language ID), e.g. `>>bel<<`.

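For illustration, a minimal sketch (assuming the model is fetched from the Hugging Face hub under the name `Helsinki-NLP/opus-mt-tc-big-es-zle`) that lists the `>>id<<` target-language labels present in the tokenizer vocabulary:

```python
from transformers import MarianTokenizer

# Hypothetical check: the language labels are ordinary vocabulary items of the
# form >>xxx<<; for this model they should include >>bel<<, >>rus<< and >>ukr<<.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-es-zle")
lang_tokens = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
print(lang_tokens)
```
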
## Usage

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>rus<< Su novela se vendió bien.",
    ">>ukr<< Quiero ir a Corea del Norte."
]

model_name = "Helsinki-NLP/opus-mt-tc-big-es-zle"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
# Его роман хорошо продавался.
# Я хочу поїхати до Північної Кореї.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-es-zle")
print(pipe(">>rus<< Su novela se vendió bien."))

# expected output: Его роман хорошо продавался.
```

## Benchmarks

* test set translations: [opusTCv20210807_transformer-big_2022-03-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.test.txt)
* test set scores: [opusTCv20210807_transformer-big_2022-03-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-zle/opusTCv20210807_transformer-big_2022-03-23.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| spa-bel | tatoeba-test-v2021-08-07 | 0.54506 | 27.5 | 205 | 1259 |
| spa-rus | tatoeba-test-v2021-08-07 | 0.68523 | 49.0 | 10506 | 69242 |
| spa-ukr | tatoeba-test-v2021-08-07 | 0.63502 | 42.3 | 10115 | 54544 |
| spa-rus | flores101-devtest | 0.49913 | 20.2 | 1012 | 23295 |
| spa-ukr | flores101-devtest | 0.47772 | 17.4 | 1012 | 22810 |
| spa-rus | newstest2012 | 0.52436 | 24.6 | 3003 | 64790 |
| spa-rus | newstest2013 | 0.54249 | 26.9 | 3000 | 58560 |

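The chr-F and BLEU figures can in principle be recomputed from the linked test-set translations; a minimal sketch, assuming the sacrebleu package (not part of this repository) and small hypothesis/reference lists:

```python
import sacrebleu

# Toy hypothesis/reference pair; in practice these would be read from the
# *.test.txt translations and the reference side of the benchmark corpus.
hyps = ["Его роман хорошо продавался."]
refs = [["Его роман хорошо продавался."]]

print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU, as reported in the table
print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF, as reported in the table
```
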
## Acknowledgements

The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.

## Model conversion info

* transformers version: 4.16.2
* OPUS-MT git hash: 1bdabf7
* port time: Thu Mar 24 03:35:13 EET 2022
* port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1,16 @@
spa-bel flores101-dev 0.36345 7.4 997 23996
spa-rus flores101-dev 0.49867 20.5 997 22657
spa-ukr flores101-dev 0.47459 17.3 997 21841
spa-bel flores101-devtest 0.36639 7.7 1012 24829
spa-rus flores101-devtest 0.49913 20.2 1012 23295
spa-ukr flores101-devtest 0.47772 17.4 1012 22810
spa-rus newstest2012 0.52436 24.6 3003 64790
spa-rus newstest2013 0.54249 26.9 3000 58560
spa-rus tatoeba-test-v2020-07-28 0.68521 49.0 10000 65817
spa-ukr tatoeba-test-v2020-07-28 0.63376 42.2 10000 53833
spa-bel tatoeba-test-v2021-03-30 0.54279 27.2 207 1274
spa-rus tatoeba-test-v2021-03-30 0.68467 49.0 10272 67686
spa-ukr tatoeba-test-v2021-03-30 0.63400 42.2 10027 53966
spa-bel tatoeba-test-v2021-08-07 0.54506 27.5 205 1259
spa-rus tatoeba-test-v2021-08-07 0.68523 49.0 10506 69242
spa-ukr tatoeba-test-v2021-08-07 0.63502 42.3 10115 54544
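This file is plain whitespace-separated text; a minimal parsing sketch, with the column meaning inferred from the benchmark table in the README (langpair, testset, chr-F, BLEU, #sent, #words) and assuming the file has been downloaded to the working directory:

```python
# Read benchmark_results.txt into a list of typed tuples.
rows = []
with open("benchmark_results.txt", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue  # skip any blank lines
        langpair, testset, chrf, bleu, nsent, nwords = line.split()
        rows.append((langpair, testset, float(chrf), float(bleu), int(nsent), int(nwords)))

# For example, print the entry with the highest BLEU score.
print(max(rows, key=lambda r: r[3]))
```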
benchmark_translations.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bec7570bf7bab57b50dc771ceeff620aac54bf886ab4d2b227d1fae5b5a8d935
size 4686272
config.json ADDED
@@ -0,0 +1,45 @@
{
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bad_words_ids": [
    [
      61576
    ]
  ],
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 61576,
  "decoder_vocab_size": 61577,
  "dropout": 0.1,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 27232,
  "forced_eos_token_id": 27232,
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "max_length": 512,
  "max_position_embeddings": 1024,
  "model_type": "marian",
  "normalize_embedding": false,
  "num_beams": 4,
  "num_hidden_layers": 6,
  "pad_token_id": 61576,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "torch_dtype": "float16",
  "transformers_version": "4.18.0.dev0",
  "use_cache": true,
  "vocab_size": 61577
}
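These hyperparameters correspond to the transformer-big setup listed in the model card (d_model 1024, 6+6 layers, 16 attention heads). A minimal sketch (assuming the hub name `Helsinki-NLP/opus-mt-tc-big-es-zle`) for loading and inspecting the configuration with transformers:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-big-es-zle")
print(config.model_type)                                  # marian
print(config.d_model, config.encoder_ffn_dim)             # 1024 4096
print(config.encoder_layers, config.decoder_layers)       # 6 6
print(config.encoder_attention_heads, config.vocab_size)  # 16 61577
```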
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88e93dc2b47d9fdccb7cb5a84241c5c744175479536909b8804e8d419917196a
size 605148995
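This is a Git LFS pointer rather than the weights themselves; a minimal sketch (assuming the real file has been fetched, e.g. with `git lfs pull`) for checking a downloaded copy against the sha256 `oid` above:

```python
import hashlib

EXPECTED = "88e93dc2b47d9fdccb7cb5a84241c5c744175479536909b8804e8d419917196a"

h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    # Hash in 1 MiB chunks so the ~600 MB checkpoint is not read into memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("checksum ok" if h.hexdigest() == EXPECTED else "checksum mismatch")
```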
source.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5048793a9d6399b4c9b83f566a9dc64ed17b18ca14dbfa57e309ecc13228f6b7
size 830008
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:40141a59f31f3d8d0cedd7d60733baa6ee82952128d0225f638dbc61a30ec387
size 1035223
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"source_lang": "es", "target_lang": "zle", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20210807_transformer-big_2022-03-23/es-zle", "tokenizer_class": "MarianTokenizer"}
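A minimal sketch (again assuming the hub name `Helsinki-NLP/opus-mt-tc-big-es-zle`) showing that these settings, together with the special tokens defined above, are what `MarianTokenizer` exposes after loading:

```python
from transformers import MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-big-es-zle")
print(tok.model_max_length)                         # 512
print(tok.eos_token, tok.pad_token, tok.unk_token)  # </s> <pad> <unk>
print(tok(">>rus<< Su novela se vendió bien.")["input_ids"])
```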
vocab.json ADDED
The diff for this file is too large to render.