---
language:
- ps
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Small Pashto - Augmented
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: google/fleurs
      type: google/fleurs
      config: null
      split: None
    metrics:
    - name: Wer
      type: wer
      value: 53.62439467312349
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small Pashto - Augmented

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6979
- Wer: 53.6244
- Cer: 22.6847
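
As a standard fine-tuned Whisper checkpoint, the model should load with the `transformers` automatic-speech-recognition pipeline. A minimal usage sketch, assuming the checkpoint is published on the Hub (the model id below is a placeholder, not this repository's confirmed id):

```python
# Minimal usage sketch. The model id is a placeholder: replace it with
# this repository's actual id on the Hugging Face Hub.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<this-repo-id>",  # placeholder, e.g. "username/whisper-small-pashto"
)

# Whisper checkpoints support forcing the language/task tokens; "ps" is Pashto.
asr.model.config.forced_decoder_ids = asr.tokenizer.get_decoder_prompt_ids(
    language="ps", task="transcribe"
)

# Transcribe a local audio file (decoded and resampled to 16 kHz internally).
print(asr("audio.wav")["text"])
```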

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 30
- training_steps: 300
- mixed_precision_training: Native AMP
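
For reference, here is a hypothetical sketch of how these values would map onto `Seq2SeqTrainingArguments` in `transformers`. The `output_dir`, the evaluation schedule (inferred from the 100-step intervals in the results table below), and `predict_with_generate` are assumptions, not values stated in this card:

```python
# Hypothetical reconstruction of the training arguments from the list above.
# output_dir, evaluation_strategy/eval_steps, and predict_with_generate are
# assumptions; the numeric values come directly from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-pashto",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=16,  # 2 x 16 = total train batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=30,
    max_steps=300,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # assumed: eval every 100 steps
    eval_steps=100,
    predict_with_generate=True,      # assumed, typical for Whisper fine-tuning
)
```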

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      | Cer      |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.9683        | 1.19  | 100  | 0.8812          | 139.3765 | 131.6166 |
| 0.6848        | 2.38  | 200  | 0.7543          | 145.9973 | 151.3369 |
| 0.5548        | 3.57  | 300  | 0.6979          | 53.6244  | 22.6847  |
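
The Wer and Cer columns are percentages (word and character error rate; values above 100 are possible when the model produces many insertions). As a point of reference, a minimal sketch of how such scores are typically computed with the `evaluate` library (the prediction and reference lists here are placeholders):

```python
# Sketch of computing WER/CER with the `evaluate` library, assuming
# `predictions` and `references` are parallel lists of transcript strings.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["...model transcriptions..."]   # placeholder
references = ["...ground-truth transcripts..."]  # placeholder

print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```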

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2