---
license: apache-2.0
metrics:
- wer
model-index:
- name: openai/whisper-small
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: myst-test
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 11.80
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: cslu_scripted
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 55.51
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: cslu_spontaneous
      type: asr
      config: en
      split: test
    metrics:
    - type: wer
      value: 28.53
      name: WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: librispeech
      type: asr
      config: en
      split: testclean
    metrics:
    - type: wer
      value: 6.23
      name: WER
---

# openai/whisper-small

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the MyST children's speech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2697
- WER: 8.51
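
The WER figures in this card are word error rates reported in percent. As a minimal sketch of the metric itself (not the evaluation pipeline used to produce these numbers), WER is the word-level edit distance between hypothesis and reference, divided by the number of reference words:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic-programming edit distance; insertions,
    # deletions, and substitutions all cost 1.
    row = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, row[0] = row[0], i
        for j in range(1, len(hyp) + 1):
            cur = row[j]
            if ref[i - 1] == hyp[j - 1]:
                row[j] = prev
            else:
                row[j] = 1 + min(prev, row[j], row[j - 1])
            prev = cur
    return row[-1] / len(ref)

# Multiply by 100 for percent, as in the figures above.
print(round(100 * wer("the cat sat on the mat", "the cat sat on mat"), 2))  # 16.67
```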

## Training and evaluation data

- Training data: MyST train set (125 hours)
- Evaluation data: MyST dev set (20.9 hours)

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
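
The scheduler settings above describe a linear schedule with 500 warmup steps over 10,000 training steps. A minimal sketch of the resulting learning-rate curve, assuming the usual linear-warmup/linear-decay behavior (the exact trainer implementation is not shown in this card):

```python
def learning_rate(step: int,
                  peak_lr: float = 1e-05,
                  warmup_steps: int = 500,
                  total_steps: int = 10_000) -> float:
    """Linear warmup from 0 to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Decay linearly from peak_lr at warmup_steps down to 0 at total_steps.
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(learning_rate(250))     # halfway through warmup: 5e-06
print(learning_rate(500))     # peak learning rate: 1e-05
print(learning_rate(10_000))  # end of training: 0.0
```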