Ranjit committed on
Commit 6468d8e
1 Parent(s): 7c60538

Update README.md

Files changed (1)
  1. README.md +68 -1
README.md CHANGED
@@ -1,3 +1,70 @@
  ---
- license: afl-3.0
+ license: apache-2.0
+ tags:
+ - whisper-event
+ - generated_from_trainer
+ datasets:
+ - Ranjit/or_in_dataset
+ metrics:
+ - wer
+ model-index:
+ - name: Whisper Small Odia 10k steps
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: Ranjit/or_in_dataset
+       type: Ranjit/or_in_dataset
+       split: test
+     metrics:
+     - name: Wer
+       type: wer
+       value: 19.772043934466467
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Small Odia v6.0
+
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [Ranjit/or_in_dataset](https://huggingface.co/datasets/Ranjit/or_in_dataset) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2806
+ - Wer: 19.7720
+
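For quick reference, a minimal inference sketch using the `transformers` pipeline API; the repo id below is a placeholder (the card does not state the Hub id of this checkpoint), so substitute the actual model id before running.

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the actual Hub id of this fine-tuned checkpoint.
MODEL_ID = "Ranjit/whisper-small-odia"

# Build an ASR pipeline; chunking lets it handle audio longer than 30 s.
asr = pipeline(
    task="automatic-speech-recognition",
    model=MODEL_ID,
    chunk_length_s=30,
)

# Transcribe a local audio file (the path is a placeholder).
result = asr("example_odia_clip.wav")
print(result["text"])
```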
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 200
+ - training_steps: 10000
+ - mixed_precision_training: Native AMP
+
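For reference, a minimal sketch of how these settings could be expressed as `Seq2SeqTrainingArguments` from `transformers`; `output_dir` and the 1000-step evaluation cadence (suggested by the results table below) are assumptions, while the remaining values mirror the list above.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-odia",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=10000,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="steps",        # assumption: evaluate every 1000 steps per the table
    eval_steps=1000,
    predict_with_generate=True,         # decode during eval so WER can be scored
)
```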
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|
+ | 0.0384 | 0.49 | 1000 | 0.1349 | 40.3740 |
+ | 0.0175 | 0.98 | 2000 | 0.1601 | 22.6468 |
+ | 0.0091 | 1.46 | 3000 | 0.1817 | 23.1515 |
+ | 0.0082 | 1.95 | 4000 | 0.2125 | 23.9139 |
+ | 0.0048 | 2.44 | 5000 | 0.2110 | 20.2522 |
+ | 0.0037 | 2.93 | 6000 | 0.2270 | 21.4855 |
+ | 0.0017 | 3.42 | 7000 | 0.2534 | 20.2399 |
+ | 0.0018 | 3.9 | 8000 | 0.2706 | 20.7277 |
+ | 0.0005 | 4.39 | 9000 | 0.2806 | 19.7720 |
+ | 0.0006 | 4.88 | 10000 | 0.2929 | 20.9747 |
+
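For context, a small sketch of how the Wer column is typically computed with the `evaluate` library; the prediction and reference strings below are placeholders, not outputs from this run.

```python
import evaluate

# Load the word-error-rate metric.
wer_metric = evaluate.load("wer")

predictions = ["a hypothetical model transcription"]  # placeholder decoded output
references = ["a hypothetical reference transcript"]  # placeholder ground truth

# WER is reported as a percentage here, matching the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```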
+
+ ### Framework versions
+
+ - Transformers 4.28.0.dev0
+ - Pytorch 1.11.0+cu113
+ - Datasets 2.10.1
+ - Tokenizers 0.13.2