jaymanvirk committed
Commit bdd3a46
1 Parent(s): bfeed3c

End of training

README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 0.3707201889020071
+      value: 0.3317591499409681
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,9 +32,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7346
-- Wer Ortho: 0.3646
-- Wer: 0.3707
+- Loss: 0.6168
+- Wer Ortho: 0.3263
+- Wer: 0.3318
 
 ## Model description
 
@@ -57,17 +57,23 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: constant_with_warmup
-- lr_scheduler_warmup_steps: 50
-- training_steps: 500
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 5
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
-| 0.0175        | 8.93  | 500  | 0.7346          | 0.3646    | 0.3707 |
+| 0.0145        | 1.0   | 28   | 0.6196          | 0.3060    | 0.3093 |
+| 0.0088        | 2.0   | 56   | 0.6278          | 0.3288    | 0.3306 |
+| 0.007         | 3.0   | 84   | 0.6202          | 0.3307    | 0.3353 |
+| 0.0009        | 4.0   | 112  | 0.6148          | 0.3245    | 0.3294 |
+| 0.0006        | 5.0   | 140  | 0.6168          | 0.3263    | 0.3318 |
 
 
 ### Framework versions
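The Wer values changed by this commit (0.3707 → 0.3318) are word error rates. As a rough illustration of what that metric measures, here is a minimal stdlib-only sketch: word-level Levenshtein distance divided by reference length. The function name is mine, and this is not the exact normalization pipeline the Trainer's `compute_metrics` used.

```python
# Minimal sketch of word error rate (WER): word-level edit distance
# over the number of reference words. Illustrative only; the card's
# values come from the Trainer's own metric computation.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming Levenshtein distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions to reach empty hypothesis
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions from empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

A Wer of 0.3318 therefore means roughly one word-level error for every three reference words.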
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab980001af2854b4786a35c28f6cdd88a15d0b857e423b4e037cdb903b31bd23
+oid sha256:04360a751a0a0b7a0e31828296a07d4cf0ee9822f761650aeef639863c53c2e7
 size 151061672
runs/Mar20_06-13-22_223e8173d876/events.out.tfevents.1710915210.223e8173d876.34.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:997210d7d22026c848fc26e81b5346c82f31bb820711590c120c4c66f573acba
-size 12408
+oid sha256:4b520c7ec174617bdb10050a2965c88478d0c27250795779d9d7e120993e9eb4
+size 14387
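The weight and TensorBoard files above are stored as Git LFS pointers: three `key value` lines (`version`, `oid`, `size`) standing in for the real blob. A small sketch of reading one such pointer, using the new `model.safetensors` pointer from this commit as input (the helper name is mine):

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
# Pointer files consist of "key value" lines per the git-lfs spec.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:04360a751a0a0b7a0e31828296a07d4cf0ee9822f761650aeef639863c53c2e7
size 151061672"""

info = parse_lfs_pointer(pointer)
# info["oid"] is the SHA-256 of the real file; info["size"] its byte count.
```

Note the `size` of `model.safetensors` is unchanged (151061672 bytes) while its `oid` differs: same tensor shapes, new weights.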