tiedeman committed on
Commit
5af4b91
1 Parent(s): 0a95843

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,221 @@
+ ---
+ library_name: transformers
+ language:
+ - af
+ - ang
+ - bar
+ - bi
+ - bzj
+ - de
+ - djk
+ - drt
+ - en
+ - enm
+ - frr
+ - fy
+ - gos
+ - gsw
+ - hrx
+ - hwc
+ - icr
+ - jam
+ - kri
+ - ksh
+ - lb
+ - li
+ - nds
+ - nl
+ - ofs
+ - pcm
+ - pdc
+ - pfl
+ - pih
+ - pis
+ - rop
+ - sco
+ - srm
+ - srn
+ - stq
+ - swg
+ - tcs
+ - tpi
+ - vls
+ - wae
+ - yi
+ - zea
+
+ tags:
+ - translation
+ - opus-mt-tc-bible
+
+ license: apache-2.0
+ model-index:
+ - name: opus-mt-tc-bible-big-gmw-en
+   results:
+   - task:
+       name: Translation multi-eng
+       type: translation
+       args: multi-eng
+     dataset:
+       name: tatoeba-test-v2020-07-28-v2023-09-26
+       type: tatoeba_mt
+       args: multi-eng
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 52.6
+     - name: chr-F
+       type: chrf
+       value: 0.70028
+ ---
+ # opus-mt-tc-bible-big-gmw-en
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Citation Information](#citation-information)
+ - [Acknowledgements](#acknowledgements)
+
+ ## Model Details
+
+ Neural machine translation model for translating from West Germanic languages (gmw) to English (en).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data comes from [OPUS](https://opus.nlpl.eu/), and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ **Model Description:**
+ - **Developed by:** Language Technology Research Group at the University of Helsinki
+ - **Model Type:** Translation (transformer-big)
+ - **Release:** 2024-08-17
+ - **License:** Apache-2.0
+ - **Language(s):**
+   - Source Language(s): afr ang bar bis bzj deu djk drt eng enm frr fry gos gsw hrx hwc icr jam kri ksh lim ltz nds nld ofs pcm pdc pfl pih pis rop sco srm srn stq swg tcs tpi vls wae yid zea
+   - Target Language(s): eng
+ - **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
+ - **Resources for more information:**
+   - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/gmw-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
+   - [OPUS-MT-train GitHub repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+   - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
+   - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
+   - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
+   - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)
+
+ ## Uses
+
+ This model can be used for translation and text-to-text generation.
+
+ ## Risks, Limitations and Biases
+
+ **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## How to Get Started With the Model
+
+ A short code example:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     "Wir müssen in Erfahrung bringen, wann Tom hierzusein gedenkt.",
+     "Tom said he didn't see anybody."
+ ]
+
+ model_name = "Helsinki-NLP/opus-mt-tc-bible-big-gmw-en"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+
+ # expected output:
+ # We need to find out when Tom remembers this.
+ # - Tom said he didn't see anybody.
+ ```
+
+ You can also use OPUS-MT models with the transformers `pipeline` API, for example:
+
+ ```python
+ from transformers import pipeline
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-gmw-en")
+ print(pipe("Wir müssen in Erfahrung bringen, wann Tom hierzusein gedenkt."))
+
+ # expected output: We need to find out when Tom remembers this.
+ ```
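+
+ Because the source side is multilingual, the same pipeline accepts input in any of the listed West Germanic source languages, while the target is always English. A minimal sketch (the Dutch and Afrikaans sentences and their glosses are illustrative, not verified model output):
+
+ ```python
+ from transformers import pipeline
+
+ # Minimal sketch: one gmw-eng pipeline handles several source languages.
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-gmw-en")
+
+ src_text = [
+     "Dit is een Nederlandse zin.",  # Dutch: "This is a Dutch sentence."
+     "Dit is 'n Afrikaanse sin.",    # Afrikaans: "This is an Afrikaans sentence."
+ ]
+ for out in pipe(src_text):
+     print(out["translation_text"])
+ ```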
+
+ ## Training
+
+ - **Data:** opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ - **Pre-processing:** SentencePiece (spm32k,spm32k)
+ - **Model Type:** transformer-big
+ - **Original MarianNMT Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.zip)
+ - **Training Scripts:** [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+
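+ The `spm32k,spm32k` pre-processing means that source and target text are each segmented with a 32k SentencePiece model; these are the `source.spm` and `target.spm` files added in this commit. A minimal sketch of inspecting the source model directly with the `sentencepiece` library (the path assumes a local clone of this repository):
+
+ ```python
+ import sentencepiece as spm
+
+ # Minimal sketch: load the source-side SentencePiece model from a local clone
+ # and segment a sample sentence into subword pieces.
+ sp = spm.SentencePieceProcessor(model_file="source.spm")
+
+ print(sp.encode("Wir müssen in Erfahrung bringen.", out_type=str))
+ print(sp.get_piece_size())  # size of the spm vocabulary (32k per the note above)
+ ```
+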
+ ## Evaluation
+
+ * [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/gmw-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
+ * test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
+ * test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/gmw-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair  | testset                              | chr-F   | BLEU | #sent | #words |
+ |-----------|--------------------------------------|---------|------|-------|--------|
+ | multi-eng | tatoeba-test-v2020-07-28-v2023-09-26 | 0.70028 | 52.6 | 10000 | 84720  |
+
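+ Scores of this kind can be recomputed from the linked test-set translations with `sacrebleu`. A minimal sketch (the file names are placeholders for a hypothesis and a reference file, one sentence per line):
+
+ ```python
+ import sacrebleu
+
+ # Minimal sketch (hypothetical file names): corpus-level BLEU and chrF,
+ # the two metrics reported in the table above.
+ with open("hyp.txt") as f:
+     hyps = [line.strip() for line in f]
+ with open("ref.txt") as f:
+     refs = [line.strip() for line in f]
+
+ bleu = sacrebleu.corpus_bleu(hyps, [refs])
+ chrf = sacrebleu.corpus_chrf(hyps, [refs])
+ print(f"BLEU  = {bleu.score:.1f}")
+ print(f"chr-F = {chrf.score / 100:.5f}")  # sacrebleu reports chrF on a 0-100 scale
+ ```
+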
+ ## Citation Information
+
+ * Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite if you use this model)
+
+ ```bibtex
+ @article{tiedemann2023democratizing,
+   title={Democratizing neural machine translation with {OPUS-MT}},
+   author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
+   journal={Language Resources and Evaluation},
+   volume={58},
+   pages={713--755},
+   year={2023},
+   publisher={Springer Nature},
+   issn={1574-0218},
+   doi={10.1007/s10579-023-09704-w}
+ }
+
+ @inproceedings{tiedemann-thottingal-2020-opus,
+   title = "{OPUS}-{MT} {--} Building open translation services for the World",
+   author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+   booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Lisboa, Portugal",
+   publisher = "European Association for Machine Translation",
+   url = "https://aclanthology.org/2020.eamt-1.61",
+   pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+   title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+   author = {Tiedemann, J{\"o}rg},
+   booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+   month = nov,
+   year = "2020",
+   address = "Online",
+   publisher = "Association for Computational Linguistics",
+   url = "https://aclanthology.org/2020.wmt-1.139",
+   pages = "1174--1182",
+ }
+ ```
+
+ ## Acknowledgements
+
+ The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
+
+ ## Model conversion info
+
+ * transformers version: 4.45.1
+ * OPUS-MT git hash: 0882077
+ * port time: Tue Oct 8 11:25:42 EEST 2024
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
+ multi-eng tatoeba-test-v2020-07-28-v2023-09-26 0.70028 52.6 10000 84720
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,41 @@
+ {
+   "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-gmw-en",
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 54701,
+   "decoder_vocab_size": 54702,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 733,
+   "forced_eos_token_id": null,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": null,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": null,
+   "num_hidden_layers": 6,
+   "pad_token_id": 54701,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "use_cache": true,
+   "vocab_size": 54702
+ }
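
The config above describes a standard transformer-big Marian architecture (6+6 layers, d_model 1024, 16 attention heads, a shared 54702-entry vocabulary). A minimal sketch for confirming these values without downloading the weights, assuming the Hub model ID from the README:

```python
from transformers import AutoConfig

# Minimal sketch: fetch only the config and print the architecture fields
# listed in config.json above.
config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-gmw-en")
print(config.model_type)                             # marian
print(config.d_model)                                # 1024
print(config.encoder_layers, config.decoder_layers)  # 6 6
print(config.vocab_size)                             # 54702
```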
generation_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "_from_model_config": true,
+   "bad_words_ids": [
+     [
+       54701
+     ]
+   ],
+   "bos_token_id": 0,
+   "decoder_start_token_id": 54701,
+   "eos_token_id": 733,
+   "forced_eos_token_id": 733,
+   "max_length": 512,
+   "num_beams": 4,
+   "pad_token_id": 54701,
+   "transformers_version": "4.45.1"
+ }
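
These defaults (beam search with 4 beams, max length 512, the pad token banned via `bad_words_ids`) are applied automatically by `generate()` and can be overridden per call. A minimal sketch, assuming the Hub model ID from the README (the Dutch input is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch: override the repository's default decoding settings
# (num_beams=4, max_length=512) for a single generate() call.
model_name = "Helsinki-NLP/opus-mt-tc-bible-big-gmw-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Dat is een goed idee."], return_tensors="pt", padding=True)
out = model.generate(**batch, num_beams=8, max_length=128)  # per-call overrides
print(tokenizer.decode(out[0], skip_special_tokens=True))
```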
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:084c9f253c0c2eebb5c587e943e80cab5e9fe75f1753123353928d6e823f223b
+ size 929737320
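
What the diff shows for the weight files is only the Git LFS pointer (spec version, object hash, byte size), not the ~930 MB of weights themselves. A minimal sketch of resolving the pointer to the actual file with `huggingface_hub`, assuming the Hub model ID from the README:

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: download the real weights file that the LFS pointer refers to.
path = hf_hub_download(
    repo_id="Helsinki-NLP/opus-mt-tc-bible-big-gmw-en",
    filename="model.safetensors",
)
print(path)  # local cache path of the ~930 MB file
```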
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b636f1706943bf0b0c9a794f49f35daf7839753c61491ab849877311fb64553f
+ size 929788549
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3316b51584d8dbea9d6f82d2981dee728c758edfcc96298c1df8afc383ff7ed8
+ size 795564
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4a63223a0526df5a86520746b752c6a95a73d513e860715d18169cdb294ee726
+ size 795314
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "gmw", "target_lang": "en", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17/gmw-en", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff