---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-finetune-minds14
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: PolyAI/minds14
      type: PolyAI/minds14
      config: en-US
      split: train[450:]
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 0.3382526564344746
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-tiny-en-finetune-minds14

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6541
- Wer Ortho: 0.3399
- Wer: 0.3383
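
For quick inference, the checkpoint can be used with the `transformers` pipeline API. The sketch below assumes the hub id `WasuratS/whisper-tiny-en-finetune-minds14` (inferred from this repository's name) and an illustrative audio file path:

```python
# Minimal inference sketch; the hub id and the audio path are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="WasuratS/whisper-tiny-en-finetune-minds14",  # assumed hub id
)

# "sample.wav" is a placeholder; the pipeline decodes and resamples audio
# files via ffmpeg before passing them to the model.
print(asr("sample.wav")["text"])
```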

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed. Per the metadata above, evaluation used the `train[450:]` slice of the en-US configuration of PolyAI/minds14.
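
A minimal sketch of reproducing that split with the `datasets` library; treating the first 450 examples as the training portion is an assumption based on the held-out slice recorded in the metadata:

```python
# Data-split sketch; using train[:450] as the training portion is an assumption.
from datasets import load_dataset

minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
train_data = minds.select(range(450))             # assumed training slice
eval_data = minds.select(range(450, len(minds)))  # train[450:] per metadata
```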

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
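
Expressed as a `transformers` `Seq2SeqTrainingArguments` configuration, these settings might look like the sketch below; the output directory is illustrative, and per-device batch size 8 with 2 accumulation steps yields the total train batch size of 16:

```python
# Hyperparameter sketch mirroring the list above; output_dir is illustrative.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-en-finetune-minds14",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=1000,
    fp16=True,  # Native AMP mixed-precision training
)
```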

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.3136        | 3.57  | 100  | 0.4883          | 0.3640    | 0.3524 |
| 0.0417        | 7.14  | 200  | 0.5146          | 0.3560    | 0.3442 |
| 0.0066        | 10.71 | 300  | 0.5736          | 0.3411    | 0.3353 |
| 0.0017        | 14.29 | 400  | 0.6040          | 0.3455    | 0.3418 |
| 0.0013        | 17.86 | 500  | 0.6226          | 0.3393    | 0.3365 |
| 0.0009        | 21.43 | 600  | 0.6352          | 0.3393    | 0.3365 |
| 0.0007        | 25.0  | 700  | 0.6436          | 0.3399    | 0.3371 |
| 0.0006        | 28.57 | 800  | 0.6492          | 0.3399    | 0.3383 |
| 0.0006        | 32.14 | 900  | 0.6530          | 0.3399    | 0.3383 |
| 0.0006        | 35.71 | 1000 | 0.6541          | 0.3399    | 0.3383 |
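
The two error rates differ in text normalisation: Wer Ortho is presumably computed on the raw (orthographic) transcripts and Wer after basic normalisation of case and punctuation, following the usual Whisper fine-tuning convention. A sketch of that computation with the `evaluate` library (the reference and prediction strings are illustrative):

```python
# WER computation sketch; the example strings below are illustrative.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

references = ["I'd like to check my account balance."]
predictions = ["i'd like to check my account balance"]

# Orthographic WER: raw text, casing and punctuation included.
wer_ortho = wer_metric.compute(references=references, predictions=predictions)

# Normalised WER: casing and punctuation stripped before comparison.
wer = wer_metric.compute(
    references=[normalizer(r) for r in references],
    predictions=[normalizer(p) for p in predictions],
)
print(f"Wer Ortho: {wer_ortho:.4f}, Wer: {wer:.4f}")
```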

### Framework versions

- Transformers 4.29.2
- PyTorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
+ - Tokenizers 0.13.3