GeorgeSpark committed on
Commit 601abf1
1 Parent(s): 89257e9

added all the models
README.md ADDED
@@ -0,0 +1 @@
+ # translation_models
compress.py ADDED
@@ -0,0 +1,12 @@
+ import gzip
+
+ def compress_file(input_path, output_path):
+     with open(input_path, 'rb') as file_in:
+         with gzip.open(output_path, 'wb') as file_out:
+             file_out.writelines(file_in)
+
+ # Example usage
+ input_file_path = 'D:\\offline-translator\\translation_models\\en_ja\\model\\model.bin'
+ compressed_file_path = 'D:\\offline-translator\\translation_models\\en_ja\\model\\compressed_model.bin.gz'
+
+ compress_file(input_file_path, compressed_file_path)
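The committed script only compresses. For completeness, a minimal decompression counterpart might look like the sketch below; it is not part of this commit, and the name `decompress_file` is hypothetical:

```python
import gzip
import shutil

def decompress_file(input_path, output_path):
    """Stream a gzip archive back to its original bytes (inverse of compress_file)."""
    with gzip.open(input_path, 'rb') as file_in:
        with open(output_path, 'wb') as file_out:
            # copyfileobj streams in chunks, so large model files
            # are never held fully in memory
            shutil.copyfileobj(file_in, file_out)
```

`shutil.copyfileobj` is used here instead of `writelines` because the decompressed model is binary data with no meaningful line boundaries.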
en_ja/README.md ADDED
@@ -0,0 +1,38 @@
+ # English-Japanese
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ @inproceedings{elkishky_ccaligned_2020,
+     author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzmán, Francisco and Koehn, Philipp},
+     booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
+     month = {November},
+     title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
+     year = {2020},
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
+     doi = "10.18653/v1/2020.emnlp-main.480",
+     pages = "5960--5969"
+ }
+
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ @inproceedings{elkishky_ccaligned_2020,
+     author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzmán, Francisco and Koehn, Philipp},
+     booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
+     month = {November},
+     title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
+     year = {2020},
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
+     doi = "10.18653/v1/2020.emnlp-main.480",
+     pages = "5960--5969"
+ }
+
+ P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
en_ja/metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "package_version": "1.1",
+     "argos_version": "1.1",
+     "from_code": "en",
+     "from_name": "English",
+     "to_code": "ja",
+     "to_name": "Japanese"
+ }
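Every language pair in this commit ships the same metadata.json shape. A quick sketch of consuming it (the helper name `load_pair_metadata` is hypothetical, not part of the package):

```python
import json

def load_pair_metadata(path):
    """Read a package's metadata.json and build a readable pair label."""
    with open(path, encoding='utf-8') as f:
        meta = json.load(f)
    # e.g. "English (en) -> Japanese (ja)"
    return f"{meta['from_name']} ({meta['from_code']}) -> {meta['to_name']} ({meta['to_code']})"
```
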
en_ja/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52a021f9d552beee5c15ec8f05e2bf72c90402ef428d58f7d625643ba2dc1780
+ size 132573333
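The model.bin, sentencepiece.model, and tokenizer files in this commit are checked in as Git LFS pointer files: three `key value` lines (`version`, `oid`, `size`) standing in for the real binary. A minimal sketch of parsing one (the function name is hypothetical):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # each pointer line is "<key> <value>", split on the first space
        key, _, value = line.partition(' ')
        fields[key] = value
    return fields
```

The `size` field gives the byte count of the real object, so it can be checked before fetching (this en_ja model is ~132 MB).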
en_ja/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
en_ja/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33de6356768ef5eaca1a41883b1990273093ffbe70720083502cb1a975b59b3d
+ size 787500
en_ja/stanza/en/tokenize/ewt.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bc5a1523f4e60107640ea44d2cb27c787bc339ba0c93be6c7dbf744d5635cd6
+ size 630886
en_ja/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
en_nl/README.md ADDED
@@ -0,0 +1,13 @@
+ # English-Dutch
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
+
en_nl/metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "package_version": "1.4",
+     "argos_version": "1.4",
+     "from_code": "en",
+     "from_name": "English",
+     "to_code": "nl",
+     "to_name": "Dutch"
+ }
en_nl/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1603d51f61c0066c08b628696fe470ae3c8aec27f487a73e907b197fdced3ac2
+ size 93102955
en_nl/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
en_nl/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d5f9332295042f8a4481b11c966ede3c1a29a9358b2e3c6f0ee88b295bd3f21
+ size 370128
en_nl/stanza/en/tokenize/ewt.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bc5a1523f4e60107640ea44d2cb27c787bc339ba0c93be6c7dbf744d5635cd6
+ size 630886
en_nl/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
en_zh/README.md ADDED
@@ -0,0 +1,23 @@
+ # English-Chinese
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
+
+ OPUS and A massively parallel corpus: the Bible in 100 languages, Christos Christodoulopoulos and Mark Steedman, *Language Resources and Evaluation*, 49 (2)
+
+ Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016.
+
+ The following disclaimer, an integral part of the United Nations Parallel Corpus, shall be respected with regard to the Corpus (no other restrictions apply):
+ - The United Nations Parallel Corpus is made available without warranty of any kind, explicit or implied. The United Nations specifically makes no warranties or representations as to the accuracy or completeness of the information contained in the United Nations Corpus.
+ - Under no circumstances shall the United Nations be liable for any loss, liability, injury or damage incurred or suffered that is claimed to have resulted from the use of the United Nations Corpus. The use of the United Nations Corpus is at the user's sole risk. The user specifically acknowledges and agrees that the United Nations is not liable for the conduct of any user. If the user is dissatisfied with any of the material provided in the United Nations Corpus, the user's sole and exclusive remedy is to discontinue using the United Nations Corpus.
+ - When using the United Nations Corpus, the user must acknowledge the United Nations as the source of the information. For references, please cite this reference: Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016.
+ - Nothing herein shall constitute or be considered to be a limitation upon or waiver, express or implied, of the privileges and immunities of the United Nations, which are specifically reserved.
+
en_zh/metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "package_version": "1.1",
+     "argos_version": "1.1",
+     "from_code": "en",
+     "from_name": "English",
+     "to_code": "zh",
+     "to_name": "Chinese"
+ }
en_zh/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c37af5f22e9ae00d2a46f66a642600bb0649e33544e71305b509ed22bdbf752e
+ size 132573333
en_zh/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
en_zh/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25606a42ca9cd80958763a188a6b3636dcf0b83195cf764a8dd1008462dd15ff
+ size 798584
en_zh/stanza/en/tokenize/ewt.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bc5a1523f4e60107640ea44d2cb27c787bc339ba0c93be6c7dbf744d5635cd6
+ size 630886
en_zh/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
ja_en/README.md ADDED
@@ -0,0 +1,38 @@
+ # Japanese-English
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ @inproceedings{elkishky_ccaligned_2020,
+     author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzmán, Francisco and Koehn, Philipp},
+     booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
+     month = {November},
+     title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
+     year = {2020},
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
+     doi = "10.18653/v1/2020.emnlp-main.480",
+     pages = "5960--5969"
+ }
+
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ @inproceedings{elkishky_ccaligned_2020,
+     author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzmán, Francisco and Koehn, Philipp},
+     booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)},
+     month = {November},
+     title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs},
+     year = {2020},
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.emnlp-main.480",
+     doi = "10.18653/v1/2020.emnlp-main.480",
+     pages = "5960--5969"
+ }
+
+ P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
ja_en/metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "package_version": "1.1",
+     "argos_version": "1.1",
+     "from_code": "ja",
+     "from_name": "Japanese",
+     "to_code": "en",
+     "to_name": "English"
+ }
ja_en/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d2ac0b0e427aa368b635812853a70b09976d2318bf889a9327488968b501c83
+ size 132573333
ja_en/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
ja_en/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af3f880c259f55c62ed6c77c1ebc3cf46d6a95cc7cd8e9fcc78ee75417b987ef
+ size 787932
ja_en/stanza/ja/tokenize/gsd.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9eb95d21cbb21a4d00be597b769345315efae1721d38fd93cee80b7ee3d7783
+ size 1041697
ja_en/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
nl_en/README.md ADDED
@@ -0,0 +1,13 @@
+ # Dutch-English
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)
+
nl_en/metadata.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "package_version": "1.4",
+     "argos_version": "1.4",
+     "from_code": "nl",
+     "from_name": "Dutch",
+     "to_code": "en",
+     "to_name": "English"
+ }
nl_en/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1151c63e6eeb14c804512ff036f30723cb41b71c08524876d2fdac79fe8e48b
+ size 93102955
nl_en/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
nl_en/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c3b21360aa8e1aeabc3a828d73f0715468336859a2ccb7d929fdffb487a6c43
+ size 369985
nl_en/stanza/nl/tokenize/alpino.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb731723a40045eeba290186d7bde1cb3a3515ea6b05877da11f1772a690e781
+ size 629305
nl_en/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
zh_en/README.md ADDED
@@ -0,0 +1,23 @@
+ # Chinese-English
+
+ Data compiled by [Opus](https://opus.nlpl.eu/).
+
+ Dictionary data from Wiktionary using [Wiktextract](https://github.com/tatuylonen/wiktextract).
+
+ Includes pretrained models from [Stanza](https://github.com/stanfordnlp/stanza/).
+
+ Credits:
+ Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
+
+ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
+
+ OPUS and A massively parallel corpus: the Bible in 100 languages, Christos Christodoulopoulos and Mark Steedman, *Language Resources and Evaluation*, 49 (2)
+
+ Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016.
+
+ The following disclaimer, an integral part of the United Nations Parallel Corpus, shall be respected with regard to the Corpus (no other restrictions apply):
+ - The United Nations Parallel Corpus is made available without warranty of any kind, explicit or implied. The United Nations specifically makes no warranties or representations as to the accuracy or completeness of the information contained in the United Nations Corpus.
+ - Under no circumstances shall the United Nations be liable for any loss, liability, injury or damage incurred or suffered that is claimed to have resulted from the use of the United Nations Corpus. The use of the United Nations Corpus is at the user's sole risk. The user specifically acknowledges and agrees that the United Nations is not liable for the conduct of any user. If the user is dissatisfied with any of the material provided in the United Nations Corpus, the user's sole and exclusive remedy is to discontinue using the United Nations Corpus.
+ - When using the United Nations Corpus, the user must acknowledge the United Nations as the source of the information. For references, please cite this reference: Ziemski, M., Junczys-Dowmunt, M., and Pouliquen, B., (2016), The United Nations Parallel Corpus, Language Resources and Evaluation (LREC’16), Portorož, Slovenia, May 2016.
+ - Nothing herein shall constitute or be considered to be a limitation upon or waiver, express or implied, of the privileges and immunities of the United Nations, which are specifically reserved.
+
+
zh_en/metadata.json ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "package_version": "1.0",
3
+ "argos_version": "1.1",
4
+ "from_code": "zh",
5
+ "from_name": "Chinese",
6
+ "to_code": "en",
7
+ "to_name": "English"
8
+ }
zh_en/model/model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f1283c535d076a531fb3fc5cb657fc62305ae75210ac120b7c7375a85451e3d
+ size 132573333
zh_en/model/shared_vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff
 
zh_en/sentencepiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c17468f4fe8d8833af571a04556c4348f99b09b4f35e3cf3b0f6419726f6654
+ size 798567
zh_en/stanza/resources.json ADDED
The diff for this file is too large to render. See raw diff
 
zh_en/stanza/zh-hans/tokenize/gsdsimp.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37cfdbc78f972a6950c66719b363c80ad148ff6b57992c960a0309224e1c87fc
+ size 1128796