nrshoudi committed
Commit 22abc03
1 Parent(s): 82a8ef9

update model card README.md

Files changed (1)
  1. README.md +40 -31
README.md CHANGED
@@ -1,9 +1,8 @@
  ---
  license: apache-2.0
+ base_model: facebook/wav2vec2-xls-r-300m
  tags:
  - generated_from_trainer
- datasets:
- - common_voice
  model-index:
  - name: wav2vec2-large-xls-r-300m-Arabic-phoneme-based
    results: []
@@ -14,10 +13,10 @@ should probably proofread and complete it, then remove this comment. -->

  # wav2vec2-large-xls-r-300m-Arabic-phoneme-based

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset and local dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7848
- - Per: 0.2061
+ - Loss: 0.7493
+ - Per: 0.1979

  ## Model description

@@ -37,45 +36,55 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 0.0005
- - train_batch_size: 16
+ - train_batch_size: 2
  - eval_batch_size: 6
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 64
+ - total_train_batch_size: 8
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 250
- - num_epochs: 20.0
+ - num_epochs: 30.0

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Per |
- |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 3.7127 | 1.0 | 222 | 1.9854 | 1.0 |
- | 1.9148 | 2.0 | 445 | 1.8780 | 0.9165 |
- | 1.8497 | 3.0 | 667 | 1.8433 | 0.9122 |
- | 1.779 | 4.0 | 890 | 1.7892 | 0.9056 |
- | 1.7023 | 5.0 | 1112 | 1.7313 | 0.8936 |
- | 1.6223 | 6.0 | 1335 | 1.6278 | 0.8729 |
- | 1.5323 | 7.0 | 1557 | 1.4546 | 0.6137 |
- | 1.2216 | 8.0 | 1780 | 0.9798 | 0.3830 |
- | 0.8624 | 9.0 | 2002 | 0.7331 | 0.3021 |
- | 0.6687 | 10.0 | 2225 | 0.6287 | 0.2529 |
- | 0.5645 | 11.0 | 2447 | 0.5874 | 0.2290 |
- | 0.4973 | 12.0 | 2670 | 0.5660 | 0.2140 |
- | 0.4528 | 13.0 | 2892 | 0.5099 | 0.1967 |
- | 0.412 | 14.0 | 3115 | 0.5045 | 0.1918 |
- | 0.3837 | 15.0 | 3337 | 0.4800 | 0.1913 |
- | 0.3519 | 16.0 | 3560 | 0.4698 | 0.1827 |
- | 0.333 | 17.0 | 3782 | 0.4623 | 0.1802 |
- | 0.3137 | 18.0 | 4005 | 0.4499 | 0.1714 |
- | 0.297 | 19.0 | 4227 | 0.4446 | 0.1707 |
- | 0.2874 | 19.96 | 4440 | 0.4393 | 0.1697 |
+ | Training Loss | Epoch | Step | Validation Loss | Per |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|
+ | 1.9601 | 1.0 | 2187 | 1.7221 | 0.9190 |
+ | 1.307 | 2.0 | 4374 | 1.0964 | 0.4532 |
+ | 0.9363 | 3.0 | 6561 | 0.9163 | 0.3469 |
+ | 0.7942 | 4.0 | 8748 | 0.8432 | 0.3037 |
+ | 0.7 | 5.0 | 10935 | 0.7827 | 0.2881 |
+ | 0.6274 | 6.0 | 13122 | 0.7456 | 0.2713 |
+ | 0.5692 | 7.0 | 15309 | 0.6924 | 0.2572 |
+ | 0.5203 | 8.0 | 17496 | 0.6521 | 0.2491 |
+ | 0.4853 | 9.0 | 19683 | 0.6583 | 0.2420 |
+ | 0.4448 | 10.0 | 21870 | 0.6580 | 0.2312 |
+ | 0.4134 | 11.0 | 24057 | 0.6313 | 0.2380 |
+ | 0.389 | 12.0 | 26244 | 0.6099 | 0.2225 |
+ | 0.3644 | 13.0 | 28431 | 0.6238 | 0.2239 |
+ | 0.3432 | 14.0 | 30618 | 0.6369 | 0.2195 |
+ | 0.3191 | 15.0 | 32805 | 0.6391 | 0.2164 |
+ | 0.2992 | 16.0 | 34992 | 0.6314 | 0.2164 |
+ | 0.2827 | 17.0 | 37179 | 0.6385 | 0.2143 |
+ | 0.2666 | 18.0 | 39366 | 0.6330 | 0.2159 |
+ | 0.2479 | 19.0 | 41553 | 0.6653 | 0.2125 |
+ | 0.2341 | 20.0 | 43740 | 0.6692 | 0.2165 |
+ | 0.2209 | 21.0 | 45927 | 0.6656 | 0.2199 |
+ | 0.2075 | 22.0 | 48114 | 0.6669 | 0.2104 |
+ | 0.1955 | 23.0 | 50301 | 0.6830 | 0.2044 |
+ | 0.1825 | 24.0 | 52488 | 0.6973 | 0.2065 |
+ | 0.1758 | 25.0 | 54675 | 0.7265 | 0.2013 |
+ | 0.1644 | 26.0 | 56862 | 0.7416 | 0.2040 |
+ | 0.1571 | 27.0 | 59049 | 0.7202 | 0.2007 |
+ | 0.1489 | 28.0 | 61236 | 0.7224 | 0.2019 |
+ | 0.1432 | 29.0 | 63423 | 0.7357 | 0.1988 |
+ | 0.1373 | 30.0 | 65610 | 0.7493 | 0.1979 |


  ### Framework versions

- - Transformers 4.30.2
+ - Transformers 4.31.0
  - Pytorch 2.0.1+cu118
  - Datasets 1.18.3
  - Tokenizers 0.13.3
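
Usage note on the card above: the sketch below shows one way to run the checkpoint for phoneme transcription with the Hugging Face Transformers CTC classes. The repository id is assumed from the card's model name, `example.wav` is a placeholder, and the example assumes the repo ships a standard Wav2Vec2 processor; adjust to the actual repository contents.

```python
# Minimal inference sketch. Assumptions: the repo id below matches this model's
# Hub repository and it bundles a standard Wav2Vec2 CTC processor/tokenizer.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phoneme-based"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load any mono clip and resample to the 16 kHz rate XLS-R expects.
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform, sample_rate, 16000).squeeze(0)

inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))  # predicted phoneme sequence
```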
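The `Per` value reported in the card is a phoneme error rate. It is commonly computed like a word error rate over space-separated phoneme tokens; the snippet below is an illustrative sketch using the `evaluate` library with made-up phoneme strings, not necessarily the exact procedure behind the reported numbers.

```python
# Illustrative PER computation: treat space-separated phonemes as "words" and
# reuse the WER implementation from the `evaluate` library. The strings below
# are invented examples, not data from this model's evaluation set.
import evaluate

wer = evaluate.load("wer")
references = ["k i t a a b", "b a a b"]   # reference phoneme sequences (hypothetical)
predictions = ["k i t a b", "b a a b"]    # model outputs (hypothetical)

per = wer.compute(predictions=predictions, references=references)
print(f"PER: {per:.4f}")  # one deletion over 10 reference phonemes -> 0.1000
```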
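The hyperparameters listed under "Training hyperparameters" in the diff correspond to fields of `transformers.TrainingArguments`. The sketch below shows that mapping under the assumption that the standard Hugging Face Trainer was used; `output_dir` is a placeholder, and the optimizer line in the card matches the Trainer's default Adam settings.

```python
# Sketch of the card's hyperparameters expressed as TrainingArguments
# (output_dir is a placeholder; fields not listed in the card keep their defaults).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-Arabic-phoneme-based",  # placeholder
    learning_rate=5e-4,                 # learning_rate: 0.0005
    per_device_train_batch_size=2,      # train_batch_size: 2
    per_device_eval_batch_size=6,       # eval_batch_size: 6
    seed=42,                            # seed: 42
    gradient_accumulation_steps=4,      # total_train_batch_size: 2 * 4 = 8
    lr_scheduler_type="linear",         # lr_scheduler_type: linear
    warmup_steps=250,                   # lr_scheduler_warmup_steps: 250
    num_train_epochs=30.0,              # num_epochs: 30.0
)
```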