ihanif committed on
Commit e778dac
1 Parent(s): 48d988c

update model card README.md

Files changed (1)
  1. README.md +19 -23
README.md CHANGED
@@ -1,12 +1,7 @@
  ---
  license: apache-2.0
  tags:
- - automatic-speech-recognition
- - google/fleurs
  - generated_from_trainer
- - hf-asr-leaderboard
- - ps
- - Pashto
  datasets:
  - fleurs
  metrics:
@@ -18,15 +13,15 @@ model-index:
        name: Automatic Speech Recognition
        type: automatic-speech-recognition
        dataset:
-         name: GOOGLE/FLEURS - PS_AF
+         name: fleurs
          type: fleurs
          config: ps_af
          split: test
-         args: 'Config: ps_af, Training split: train+validation, Eval split: test'
+         args: ps_af
        metrics:
        - name: Wer
          type: wer
-         value: 0.5137278308321964
+         value: 0.5156036834924966
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,11 +29,11 @@ should probably proofread and complete it, then remove this comment. -->

  # facebook/wav2vec2-xls-r-300m

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/FLEURS - PS_AF dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.9154
- - Wer: 0.5137
- - Cer: 0.1966
+ - Loss: 0.9162
+ - Wer: 0.5156
+ - Cer: 0.1969

  ## Model description

@@ -66,25 +61,26 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 2000
- - num_epochs: 50.0
+ - training_steps: 4000
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 5.0767 | 6.33 | 500 | 4.8783 | 1.0 | 1.0 |
- | 3.1156 | 12.66 | 1000 | 3.0990 | 1.0 | 1.0 |
- | 1.3506 | 18.99 | 1500 | 1.1056 | 0.7031 | 0.2889 |
- | 0.9997 | 25.32 | 2000 | 0.9191 | 0.5944 | 0.2301 |
- | 0.7838 | 31.65 | 2500 | 0.8952 | 0.5556 | 0.2152 |
- | 0.6665 | 37.97 | 3000 | 0.8908 | 0.5252 | 0.2017 |
- | 0.6265 | 44.3 | 3500 | 0.9063 | 0.5133 | 0.1954 |
+ | Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
+ |:-------------:|:-----:|:----:|:------:|:---------------:|:------:|
+ | 5.0767 | 6.33 | 500 | 1.0 | 4.8783 | 1.0 |
+ | 3.1156 | 12.66 | 1000 | 1.0 | 3.0990 | 1.0 |
+ | 1.3506 | 18.99 | 1500 | 0.2889 | 1.1056 | 0.7031 |
+ | 0.9997 | 25.32 | 2000 | 0.2301 | 0.9191 | 0.5944 |
+ | 0.7838 | 31.65 | 2500 | 0.2152 | 0.8952 | 0.5556 |
+ | 0.6665 | 37.97 | 3000 | 0.2017 | 0.8908 | 0.5252 |
+ | 0.6265 | 44.3 | 3500 | 0.1954 | 0.9063 | 0.5133 |
+ | 0.5935 | 50.63 | 4000 | 0.1969 | 0.9162 | 0.5156 |


  ### Framework versions

  - Transformers 4.26.0.dev0
- - Pytorch 1.13.0+cu117
+ - Pytorch 1.13.1+cu117
  - Datasets 2.7.1.dev0
  - Tokenizers 0.13.2
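
The Wer and Cer values reported in this card are word error rate and character error rate: the Levenshtein edit distance between the model's transcript and the reference, over words or characters respectively, divided by the reference length. A minimal self-contained sketch of that arithmetic (the example strings below are hypothetical; the card's values come from the fleurs ps_af test split):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row DP."""
    dp = list(range(len(hyp) + 1))  # distances for the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1] from the previous row
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,         # deletion of r
                dp[j - 1] + 1,     # insertion of h
                prev + (r != h),   # substitution (free when tokens match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over word tokens / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance over characters / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Training scripts typically compute these with `jiwer` or the `evaluate` library rather than by hand; this version only illustrates what the two metrics measure.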