Harcuracy committed
Commit: 6c913bb
Parent: 32324ed

End of training

Files changed (2):
  1. README.md +8 -7
  2. model.safetensors +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ model-index:
     metrics:
     - name: Wer
       type: wer
-      value: 76.7479519908554
+      value: 75.33815964945704
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -35,8 +35,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1447
-- Wer: 76.7480
+- Loss: 1.2762
+- Wer: 75.3382
 
 ## Model description
 
@@ -63,20 +63,21 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- training_steps: 1000
+- training_steps: 1500
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch   | Step | Validation Loss | Wer     |
 |:-------------:|:-------:|:----:|:---------------:|:-------:|
-| 0.1231        | 5.5556  | 500  | 0.9345          | 77.2052 |
-| 0.0087        | 11.1111 | 1000 | 1.1447          | 76.7480 |
+| 0.1066        | 5.5556  | 500  | 0.9370          | 76.7003 |
+| 0.0053        | 11.1111 | 1000 | 1.1919          | 74.9571 |
+| 0.0012        | 16.6667 | 1500 | 1.2762          | 75.3382 |
 
 
 ### Framework versions
 
 - Transformers 4.47.0
-- Pytorch 2.5.1+cu121
+- Pytorch 2.4.0
 - Datasets 3.2.0
 - Tokenizers 0.21.0
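The `Wer` figures in the diff above are word error rates in percent: the word-level edit distance between the model's transcript and the reference, divided by the number of reference words. A minimal, stdlib-only sketch of that computation (illustrative only; this is not the evaluation code the Trainer actually ran):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (Levenshtein over words).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words ≈ 16.67% WER.
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 2))
```

A WER above 100% is possible when the hypothesis contains many insertions, which is why values like 76.7 here still indicate a model far from production quality on this dataset.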
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a3a6cd0d03bc40bf6adb43dea8055587d00484f0dd7ec097fbe597fa6506f88b
+oid sha256:57b0bf77d61fff63369cd591db942cd3149551c73217922929aa011b2311c931
 size 966995080
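The `model.safetensors` entry in this diff is not the weights themselves but a Git LFS pointer file (spec v1): three `key value` lines giving the spec URL, the content hash (`oid sha256:<hex>`), and the byte size of the real object. Only the hash changed here, so the commit swapped in new weights of identical size. A small sketch of parsing such a pointer, using the post-commit pointer from this diff (the `parse_lfs_pointer` helper is hypothetical, not part of any Git LFS library):

```python
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:57b0bf77d61fff63369cd591db942cd3149551c73217922929aa011b2311c931
size 966995080
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each non-empty line is "key value"; split on the first space only,
    # since the version value is itself a URL containing no spaces we rely on.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,          # hash algorithm, e.g. "sha256"
        "digest": digest,      # hex digest identifying the blob
        "size": int(fields["size"]),  # size of the real file in bytes
    }

info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 966995080
```

The unchanged `size 966995080` line (~967 MB) is consistent with a whisper-small checkpoint: same architecture, same parameter count, different trained values.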