tiedeman committed
Commit 9b9fbda
1 Parent(s): 0a21ca0

Initial commit

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,201 @@
---
library_name: transformers
language:
- be
- bg
- bs
- cs
- csb
- cu
- de
- dsb
- en
- es
- fr
- hr
- hsb
- mk
- orv
- pl
- pt
- ru
- rue
- sh
- sk
- sl
- sr
- szl
- uk

tags:
- translation
- opus-mt-tc-bible

license: apache-2.0
model-index:
- name: opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla
  results:
  - task:
      name: Translation multi-multi
      type: translation
      args: multi-multi
    dataset:
      name: tatoeba-test-v2020-07-28-v2023-09-26
      type: tatoeba_mt
      args: multi-multi
    metrics:
    - name: BLEU
      type: bleu
      value: 43.8
    - name: chr-F
      type: chrf
      value: 0.64962
---
# opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)

## Model Details

Neural machine translation model for translating from German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa) to Slavic languages (sla).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using Hugging Face's transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release:** 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
  - Source Language(s): deu eng fra por spa
  - Target Language(s): bel bos bul ces chu cnr csb dsb hbs hrv hsb mkd orv pol rue rus slk slv srp szl ukr
  - Valid Target Language Labels: >>bel<< >>bos_Cyrl<< >>bos_Latn<< >>bul<< >>ces<< >>chu<< >>cnr<< >>cnr_Latn<< >>csb<< >>csb_Latn<< >>czk<< >>dsb<< >>hbs<< >>hbs_Cyrl<< >>hbs_Latn<< >>hrv<< >>hsb<< >>kjv<< >>mkd<< >>orv<< >>orv_Cyrl<< >>pol<< >>pox<< >>rue<< >>rus<< >>slk<< >>slv<< >>srp_Cyrl<< >>svm<< >>szl<< >>ukr<<
- **Original Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-sla/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
  - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-sla/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
  - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
  - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)

This is a multilingual translation model with multiple target languages. A sentence-initial language token of the form `>>id<<` (where `id` is a valid target language ID) is required, e.g. `>>bel<<`.
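
The set of language tokens a given checkpoint accepts can be read directly from the tokenizer vocabulary, since they are stored as ordinary vocabulary entries. A minimal sketch (output truncated for illustration):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla")

# Target-language control tokens have the form >>id<<.
lang_tokens = sorted(t for t in tokenizer.get_vocab() if t.startswith(">>") and t.endswith("<<"))
print(lang_tokens)  # e.g. ['>>bel<<', '>>bos_Latn<<', ..., '>>ukr<<']
```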

## Uses

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## How to Get Started With the Model

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

# Each source sentence starts with the desired target-language token.
src_text = [
    ">>bel<< Replace this with text in an accepted source language.",
    ">>ukr<< This is the second sentence."
]

model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla")
print(pipe(">>bel<< Replace this with text in an accepted source language."))
```
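
The pipeline also accepts a batch of inputs, each carrying its own target-language token; a brief sketch:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla")

# Each input selects its own target language via the >>id<< prefix.
outputs = pipe([
    ">>ukr<< This is the second sentence.",
    ">>pol<< Replace this with text in an accepted source language.",
])
for out in outputs:
    print(out["translation_text"])
```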

## Training

- **Data:** opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing:** SentencePiece (spm32k,spm32k)
- **Model Type:** transformer-big
- **Original MarianNMT Model:** [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-sla/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts:** [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
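
The `spm32k,spm32k` setting means source and target text were segmented with separate 32k-vocabulary SentencePiece models, shipped in this repository as `source.spm` and `target.spm`. `MarianTokenizer` applies them automatically, but they can also be inspected on their own; a small sketch, assuming the standalone `sentencepiece` package is installed:

```python
import sentencepiece as spm

# Load the source-side segmentation model shipped with this repository.
sp = spm.SentencePieceProcessor(model_file="source.spm")

print(sp.get_piece_size())                         # vocabulary size, roughly 32k pieces
print(sp.encode("This is a test.", out_type=str))  # subword pieces for a raw sentence
```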

## Evaluation

* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-sla/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-sla/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-sla/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.64962 | 43.8 | 10000 | 64735 |
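
As a sketch of how such numbers are typically reproduced, comparable corpus-level BLEU and chr-F scores can be computed from your own system outputs with the `sacrebleu` package (an assumption of this example; the strings below are placeholders, not data from the benchmark):

```python
import sacrebleu

hyps = ["system translation of sentence 1"]       # model outputs, one string per segment
refs = [["reference translation of sentence 1"]]  # one list of strings per reference set

print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU
print(sacrebleu.corpus_chrf(hyps, refs).score)  # chr-F
```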

## Citation Information

* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/), and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite if you use this model).

```bibtex
@article{tiedemann2023democratizing,
  title = {Democratizing neural machine translation with {OPUS-MT}},
  author = {Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
  journal = {Language Resources and Evaluation},
  number = {58},
  pages = {713--755},
  year = {2023},
  publisher = {Springer Nature},
  issn = {1574-0218},
  doi = {10.1007/s10579-023-09704-w}
}

@inproceedings{tiedemann-thottingal-2020-opus,
  title = "{OPUS}-{MT} {--} Building open translation services for the World",
  author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
  booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
  month = nov,
  year = "2020",
  address = "Lisboa, Portugal",
  publisher = "European Association for Machine Translation",
  url = "https://aclanthology.org/2020.eamt-1.61",
  pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
  title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
  author = {Tiedemann, J{\"o}rg},
  booktitle = "Proceedings of the Fifth Conference on Machine Translation",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2020.wmt-1.139",
  pages = "1174--1182",
}
```
192
+ ## Acknowledgements
193
+
194
+ The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).
195
+
196
+ ## Model conversion info
197
+
198
+ * transformers version: 4.45.1
199
+ * OPUS-MT git hash: 0882077
200
+ * port time: Tue Oct 8 10:35:19 EEST 2024
201
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1 @@
multi-multi	tatoeba-test-v2020-07-28-v2023-09-26	0.64962	43.8	10000	64735
benchmark_translations.zip ADDED
File without changes
config.json ADDED
@@ -0,0 +1,41 @@
{
  "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla",
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 59955,
  "decoder_vocab_size": 59956,
  "dropout": 0.1,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 501,
  "forced_eos_token_id": null,
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "max_length": null,
  "max_position_embeddings": 1024,
  "model_type": "marian",
  "normalize_embedding": false,
  "num_beams": null,
  "num_hidden_layers": 6,
  "pad_token_id": 59955,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.45.1",
  "use_cache": true,
  "vocab_size": 59956
}
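
These hyperparameters spell out the transformer-big geometry named in the model card (6 encoder + 6 decoder layers, 1024-dimensional embeddings, 16 attention heads) and can be read programmatically; a short sketch:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla")

# transformer-big: 6+6 layers, d_model=1024, 16 heads per attention block.
print(config.model_type, config.encoder_layers, config.decoder_layers,
      config.d_model, config.encoder_attention_heads)
```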
generation_config.json ADDED
@@ -0,0 +1,16 @@
{
  "_from_model_config": true,
  "bad_words_ids": [
    [
      59955
    ]
  ],
  "bos_token_id": 0,
  "decoder_start_token_id": 59955,
  "eos_token_id": 501,
  "forced_eos_token_id": 501,
  "max_length": 512,
  "num_beams": 4,
  "pad_token_id": 59955,
  "transformers_version": "4.45.1"
}
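
`generate()` picks these defaults up automatically (beam search with 4 beams, generation capped at 512 tokens, the pad token banned via `bad_words_ids`), and individual calls can override them. A sketch, assuming the published model ID:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-sla"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>pol<< This is a test."], return_tensors="pt", padding=True)

# Override the shipped defaults (num_beams=4, max_length=512) for this call only.
out = model.generate(**batch, num_beams=8, max_length=128)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```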
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e9e4ce248f51f5f35a205c141aca5eff99b4984ab3a95b6f2b90ea7a9ce24a80
size 951278720
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c560ec106357aad4181c98f9e3d79896727fbc4288aacc5f415f616b037f4f05
size 951329989
source.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:710f4b4204f788939c97981c12dfb0a6483d256449767b503048475284a30517
size 807444
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:64a0c7c2d136fb57060c642e898f57c1dbd77c233a30612ddfc888488f1608dc
size 859401
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{
  "source_lang": "deu+eng+fra+por+spa",
  "target_lang": "sla",
  "unk_token": "<unk>",
  "eos_token": "</s>",
  "pad_token": "<pad>",
  "model_max_length": 512,
  "sp_model_kwargs": {},
  "separate_vocabs": false,
  "special_tokens_map_file": null,
  "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30/deu+eng+fra+por+spa-sla",
  "tokenizer_class": "MarianTokenizer"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff