Mainak Manna committed on
Commit
b689280
1 Parent(s): 1567450

First version of the model

Files changed (1)
  1. README.md +76 -0
README.md ADDED
@@ -0,0 +1,76 @@
---
language: Deutsch Spanish
tags:
- translation Deutsch Spanish model
datasets:
- dcep europarl jrc-acquis
widget:
- text: "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"
---

# legal_t5_small_trans_de_es_small_finetuned model

A model for translating legal text from German to Spanish, first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is first pretrained on all of the translation data with an unsupervised task, and then trained on three parallel corpora from JRC-ACQUIS, Europarl and DCEP.

## Model description

legal_t5_small_trans_de_es_small_finetuned was initially pretrained on an unsupervised task, masked language modelling, using all of the data in the training set. It is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline t5 model down by using `d_model = 512`, `d_ff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

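For readers who want to see how those hyperparameters map onto the `transformers` API, here is a minimal sketch that builds an equivalent `T5Config`; it is purely illustrative and is not needed to use the released checkpoint:

```python
from transformers import T5Config

# Illustrative config mirroring the architecture described above:
# d_model = 512, d_ff = 2048, 8 attention heads, 6 encoder and 6 decoder layers.
config = T5Config(
    d_model=512,
    d_ff=2048,
    num_heads=8,
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
)

print(config)  # ~60M parameters when instantiated as T5ForConditionalGeneration
```
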
## Intended uses & limitations

The model could be used for translation of legal texts from German to Spanish.

### How to use

Here is how to use this model to translate legal text from German to Spanish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

# Note: AutoModelWithLMHead is deprecated in recent versions of transformers;
# AutoModelForSeq2SeqLM is the modern equivalent for T5-style models.
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_trans_de_es_small_finetuned"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_trans_de_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # GPU 0; use device=-1 to run on CPU
)

de_text = "Bei einer Kombination von Artikel 124 Absatz 14 mit Artikel 136 AEUV scheint die in den Artikeln 121 und 126 AEUV"

pipeline([de_text], max_length=512)
```
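
The pipeline returns one dictionary per input, each with a `translation_text` key holding the Spanish output, so the call above yields a one-element list.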

## Training data

The legal_t5_small_trans_de_es_small_finetuned model (where the supervised task involved only the corresponding language pair, while the unsupervised task had the data of all language pairs available) was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html), [EUROPARL](https://www.statmt.org/europarl/), and [DCEP](https://ec.europa.eu/jrc/en/language-technologies/dcep) datasets, consisting of 8 million parallel texts.

## Training procedure

The model was trained on a single TPU Pod v3-8 for 250K steps in total, using a sequence length of 512 and a batch size of 4,096, with the encoder-decoder architecture described above. The optimizer used was AdaFactor, with an inverse square root learning rate schedule for pretraining.

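As a rough sketch of that optimizer setup, here is how one might instantiate AdaFactor with its built-in inverse-square-root schedule via the `Adafactor` implementation in `transformers`; this illustrates the described configuration, not the authors' actual training script:

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# relative_step=True enables Adafactor's internal inverse-square-root
# learning rate schedule, matching the description above.
optimizer = Adafactor(
    model.parameters(),
    lr=None,  # learning rate is derived from the schedule
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)
```
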
### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to obtain the vocabulary (with byte pair encoding) that is used with this model.

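A minimal sketch of how such a vocabulary could be built, assuming SentencePiece as the tooling (the tool choice, file name, and vocabulary size below are assumptions for illustration, not taken from the card):

```python
import sentencepiece as spm

# Hypothetical invocation: the input file and vocab_size are illustrative.
spm.SentencePieceTrainer.train(
    input="parallel_corpus_all_pairs.txt",  # the actual setup used 88M lines
    model_prefix="legal_t5_vocab",
    model_type="unigram",  # unigram language model, as described above
    vocab_size=32000,
)
```
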
### Pretraining

The pre-training data was the combined data from all 42 language pairs. The task for the model was to predict the portions of a sentence which were masked randomly.

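For intuition, T5-style masked language modelling turns a sentence into an input/target pair like the following (the exact masking scheme is not spelled out in the card; the sentinel-token convention below is the standard T5 one, shown here as an assumption):

```python
# Randomly chosen spans are replaced by sentinel tokens in the input,
# and the target reproduces the masked spans in order.
input_text = "Bei einer <extra_id_0> von Artikel 124 <extra_id_1> 14"
target_text = "<extra_id_0> Kombination <extra_id_1> Absatz"
```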

## Evaluation results

When the model is used on the translation test dataset, it achieves the following results:

Test results:

| Model | BLEU score |
|:-----:|:-----:|
| legal_t5_small_trans_de_es_small_finetuned | 47.006 |

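To reproduce a score of this kind, one could score the pipeline's output with `sacrebleu`; the snippet below reuses the `pipeline` object from the usage example above, and the source/reference strings are placeholders, since the test split is not shipped with the card:

```python
import sacrebleu

# Illustrative placeholders: substitute the real German/Spanish test split here.
sources = ["<German test sentence>"]
references = ["<Spanish reference translation>"]

translations = [out["translation_text"] for out in pipeline(sources, max_length=512)]

# sacrebleu expects a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(translations, [references])
print(f"BLEU: {bleu.score:.3f}")
```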

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)