boumehdi committed on
Commit 3ae34e5 · verified · 1 Parent(s): 06f6b82

Update README.md

Files changed (1): README.md (+6 −6)

README.md CHANGED
@@ -18,23 +18,23 @@ model-index:
   metrics:
   - name: Test WER
     type: wer
-   value: 0.103704
+   value: 0.099590
 ---
 # Wav2Vec2-Large-XLSR-53-Moroccan-Darija
 
 **wav2vec2-large-xlsr-53 new model**
 
- - Fine-tuned on 50 hours of labeled Darija audio extracted from the MDVC corpus, which contains more than 1,000 hours of Moroccan Darija ("ary").
+ - Fine-tuned on 51 hours of labeled Darija audio extracted from the MDVC corpus, which contains more than 1,000 hours of Moroccan Darija ("ary").
  - Fine-tuning is ongoing 24/7 to improve accuracy.
  - We add data to the model every day (we prefer not to add the full MDVC corpus at once, as we are still standardizing the way Moroccan Darija is written).
 
 <table><thead><tr><th><strong>Training Loss</strong></th> <th><strong>Validation Loss</strong></th> <th><strong>WER</strong></th></tr></thead> <tbody><tr>
- <td>0.463900</td>
- <td>0.118950</td>
- <td>0.106740</td>
+ <td>0.459900</td>
+ <td>0.117024</td>
+ <td>0.099590</td>
 </tr> </tbody></table>
 
- <i>NB: The training loss has gone up because we increased mask_time_prob to 50%; training continues to bring it back down.</i>
+ <i>PS: The training loss has gone up because we increased mask_time_prob to 50%; training continues to bring it back down.</i>
 
 ## Usage
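
The WER values reported in this diff are word error rate: the word-level Levenshtein (edit) distance between a reference transcript and the model's hypothesis, divided by the number of reference words. A minimal pure-Python sketch of that metric is below; it is illustrative only, and is not the evaluation script used for this model (which more likely relied on a library such as `jiwer` or Hugging Face `evaluate`):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions turn ref[:i] into an empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions turn an empty reference into hyp[:j]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution (or match)
    return d[-1][-1] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") over a
# 6-word reference gives WER = 2/6.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
```

On this scale, the commit's change from 0.103704 to 0.099590 means roughly one word in ten in the test transcripts is still inserted, deleted, or substituted.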