tgrhn committed on
Commit
a2a0dff
1 Parent(s): 961486f

End of training

README.md CHANGED
@@ -25,7 +25,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 20.31449366519903
+      value: 20.934495462305687
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.1 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2452
-- Wer: 20.3145
+- Loss: 0.3657
+- Wer: 20.9345
 
 ## Model description
 
@@ -56,21 +56,24 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1.25e-05
-- train_batch_size: 64
-- eval_batch_size: 32
+- train_batch_size: 128
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 2500
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.1194        | 1.46  | 1000 | 0.2538          | 21.3874 |
-| 0.0566        | 2.92  | 2000 | 0.2452          | 20.3145 |
+| 0.0855        | 2.92  | 1000 | 0.2497          | 21.0261 |
+| 0.0143        | 5.83  | 2000 | 0.2964          | 21.4700 |
+| 0.0026        | 8.75  | 3000 | 0.3394          | 20.9597 |
+| 0.0012        | 11.66 | 4000 | 0.3584          | 20.9201 |
+| 0.0009        | 14.58 | 5000 | 0.3657          | 20.9345 |
 
 
 ### Framework versions
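For reference, a minimal sketch of how the updated hyperparameters above could be expressed with `transformers`' `Seq2SeqTrainingArguments`. This is not the author's training script; the output directory and the eval/save cadence are assumptions (the results table reports evaluation every 1000 steps).

```python
# Sketch only: maps the hyperparameters listed in the updated card onto
# Seq2SeqTrainingArguments. output_dir and the eval/save cadence are assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-cv16",     # hypothetical path, not from the card
    learning_rate=1.25e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                           # "Native AMP" mixed-precision training
    evaluation_strategy="steps",         # assumed: evaluate/save every 1000 steps
    eval_steps=1000,
    save_steps=1000,
    predict_with_generate=True,          # generate text at eval time so WER can be computed
    report_to=["tensorboard"],
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the optimizer defaults,
# so they need no explicit arguments here.
```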
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3c8ad74218c3588867f055cdfc2214674298b8506a4fe45b84487a62f3699505
+oid sha256:faa5334e75eb3d20c1b856688ddee49d6de495080f3f33385107d93d3431a920
 size 966995080
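A quick way to confirm a local copy of the weights matches the new LFS pointer is to hash the file; a minimal sketch, assuming the repo has been checked out with the LFS object downloaded into the current directory:

```python
# Sketch only: verify a downloaded model.safetensors against the sha256 oid
# in the LFS pointer above.
import hashlib

digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

print(digest.hexdigest())
# expected: faa5334e75eb3d20c1b856688ddee49d6de495080f3f33385107d93d3431a920
```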
runs/Mar10_21-55-07_aitest2/events.out.tfevents.1710096908.aitest2.53922.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f3a57d55c6caba16baaab9a7122e5ed141d259e2aa330188e54ce12a80d00113
-size 49179
+oid sha256:1fdf429e16d35757b251a603fd7ce4c4378023bea7845084e27576a4e2c7f522
+size 49533
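The updated event file carries the TensorBoard scalars behind the results table. A minimal sketch for reading the logged WER curve, assuming `tensorboard` is installed and the Trainer logged evaluation metrics under the usual `eval/wer` tag:

```python
# Sketch only: read the eval WER scalars from the committed run directory.
# The "eval/wer" tag is an assumption based on how Trainer typically names
# TensorBoard scalars; adjust if the run used a different tag.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Mar10_21-55-07_aitest2")
acc.Reload()  # parse the events.out.tfevents.* files in the directory

for scalar in acc.Scalars("eval/wer"):
    print(scalar.step, scalar.value)
```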