lachkarsalim committed
Commit cb76a3b
1 Parent(s): 77db517

Update README.md

Files changed (1)
  1. README.md +11 -43
README.md CHANGED
@@ -1,60 +1,28 @@
  ---
  license: apache-2.0
  base_model: Helsinki-NLP/opus-mt-ar-en
- tags:
- - generated_from_trainer
- metrics:
- - bleu
- model-index:
- - name: results_arabicTranslation
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # results_arabicTranslation
-
- This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en)
- It achieves the following results on the evaluation set:
- | Epoch | Training Loss | Validation Loss | Bleu      |
- |-------|---------------|-----------------|-----------|
- | 1     | 2.271900      | 2.034573        | 25.406637 |
- | 2     | 1.854200      | 1.787860        | 20.556681 |
- | 3     | 1.642800      | 1.677009        | 24.274589 |
- | 4     | 1.508300      | 1.630295        | 20.556681 |
- | 5     | 1.447700      | 1.615814        | 24.274589 |
-
- ## Model description
-
- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 2e-05
  - train_batch_size: 32
  - eval_batch_size: 32
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
  - num_epochs: 5
- - mixed_precision_training: True FP16 enabled
-
- ### Framework versions
-
- - Transformers 4.38.2
- - Pytorch 2.2.2+cu121
- - Datasets 2.18.0
- - Tokenizers 0.15.2
 
  ---
  license: apache-2.0
  base_model: Helsinki-NLP/opus-mt-ar-en
+ language:
+ - ar
+ - en
+ pipeline_tag: translation
  ---
+ # Darija (Arabizi) to English Translation
+
+ This model translates Moroccan Darija written in Latin script (Arabizi) into English. It was trained on 60,000 rows of translation examples.
+
+ This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the Darija Open Dataset (DODa), an ambitious open-source project dedicated to the Moroccan dialect. With about 150,000 entries, DODa is arguably the largest open-source collaborative project for Darija <=> English translation built for Natural Language Processing purposes.
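+
+ A minimal usage sketch with the Transformers seq2seq API (the `model_id` below is a placeholder based on this repo's earlier name and may differ from the actual Hub id):
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ # Placeholder repo id; substitute this model's actual Hub id.
+ model_id = "lachkarsalim/results_arabicTranslation"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
+
+ # Darija in Latin script (Arabizi), roughly "all good, everything is fine"
+ inputs = tokenizer("labas, kolchi mezyan", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=40)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```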
  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - GPU: A100
  - train_batch_size: 32
  - eval_batch_size: 32
  - num_epochs: 5
+ - mixed_precision_training: FP16 enabled
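+
+ A rough sketch of how these settings could map onto `Seq2SeqTrainingArguments` (values not listed above, such as the learning rate and output directory, are illustrative assumptions, not the exact training script):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ args = Seq2SeqTrainingArguments(
+     output_dir="darija-en-finetune",  # illustrative path, not the original
+     per_device_train_batch_size=32,   # train_batch_size: 32
+     per_device_eval_batch_size=32,    # eval_batch_size: 32
+     num_train_epochs=5,               # num_epochs: 5
+     fp16=True,                        # mixed-precision (FP16) on the A100
+     predict_with_generate=True,       # assumption: generate during eval for BLEU
+ )
+ ```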