BounharAbdelaziz committed
Commit 5faf25b
1 Parent(s): e05df3f

Update README.md

Files changed (1): README.md (+41 −47)
README.md CHANGED
@@ -1,47 +1,41 @@
- ---
- tags:
- - generated_from_trainer
- model-index:
- - name: Terjman-Large-v2
- results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Terjman-Large-v2
-
- This model was trained from scratch on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 96
- - eval_batch_size: 96
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.03
- - num_epochs: 150
-
- ### Framework versions
-
- - Transformers 4.39.2
- - Pytorch 2.2.2+cpu
- - Datasets 2.18.0
- - Tokenizers 0.15.2
 
+ ---
+ ---
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: results
+ results: []
+ datasets:
+ - BounharAbdelaziz/English-to-Moroccan-Darija
+ language:
+ - ar
+ - en
+ metrics:
+ - bleu
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Terjman-Large-v2
+
+ This model translates English to Moroccan Darija. It is a fine-tuned version of "Helsinki-NLP/opus-mt-tc-big-en-ar" on the "BounharAbdelaziz/English-to-Moroccan-Darija" dataset.
+
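As a minimal usage sketch with the 🤗 Transformers `pipeline` API — assuming the model is published on the Hub under the repo id `BounharAbdelaziz/Terjman-Large-v2`, which is inferred from the author and model name rather than stated in the commit:

```python
# Sketch: translate English to Moroccan Darija with the fine-tuned model.
from transformers import pipeline

MODEL_ID = "BounharAbdelaziz/Terjman-Large-v2"  # assumed Hub repo id

def translate(text: str) -> str:
    # Build a translation pipeline and return the first hypothesis.
    translator = pipeline("translation", model=MODEL_ID)
    return translator(text)[0]["translation_text"]

# Example call (downloads the model weights on first use):
# translate("Hello, how are you?")
```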
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 96
+ - eval_batch_size: 96
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 150
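The hyperparameters above map roughly onto 🤗 Transformers `Seq2SeqTrainingArguments` as sketched below; the commit does not include the actual training script, so the `output_dir` and the exact argument set are assumptions:

```python
# Sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments
# (illustrative; the real training script is not part of this commit).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="terjman-large-v2",   # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=150,
)
```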
+
+ ### Framework versions
+
+ - Transformers 4.39.2
+ - Pytorch 2.2.2+cpu
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2