# whisper-synthesized-turkish-8-hour
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on a synthesized Turkish speech dataset (roughly 8 hours, per the model name; the card does not identify the dataset). It achieves the following results on the evaluation set (a sketch of how the WER figure is computed follows the list):
- Loss: 0.2300
- WER: 23.0527 (word error rate, as a percentage)
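For reference, a minimal sketch of how such a WER number is typically computed with the Hugging Face `evaluate` library; the strings below are hypothetical, and the actual evaluation script for this model is not provided.

```python
import evaluate  # pip install evaluate jiwer

# Load the word-error-rate metric.
wer_metric = evaluate.load("wer")

predictions = ["merhaba dünya"]        # hypothetical model outputs
references = ["merhaba güzel dünya"]   # hypothetical reference transcripts

# `evaluate` returns WER as a fraction; the card reports 100 * WER.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 33.3333 here (one deleted word out of three)
```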
## Model description
More information needed
## Intended uses & limitations
More information needed
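Usage details are not documented; as a placeholder until they are, here is a minimal, hedged sketch of Turkish transcription with this checkpoint via the `transformers` pipeline. The model id and audio path are assumptions, and forcing Turkish transcription through `generate_kwargs` assumes a transformers version recent enough to support Whisper's `language`/`task` generation arguments (true for the 4.28 dev build listed below).

```python
from transformers import pipeline

# Assumed model id / local path; adjust to wherever this checkpoint lives.
asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-synthesized-turkish-8-hour",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Force Turkish transcription (rather than language auto-detection or translation).
result = asr(
    "sample_turkish.wav",  # hypothetical audio file
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```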
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
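For reference, a hedged sketch of how the hyperparameters above map onto `transformers.Seq2SeqTrainingArguments`. The original training script is not provided; `output_dir` is an assumption, and the 100-step evaluation cadence is inferred from the results table below.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-synthesized-turkish-8-hour",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed; matches the 100-step cadence below
    eval_steps=100,
    predict_with_generate=True,   # assumed; required to compute WER during eval
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default,
    # so no explicit adam_beta1/adam_beta2/adam_epsilon settings are needed.
)
```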
### Training results
Training Loss | Epoch | Step | Validation Loss | WER (%) |
---|---|---|---|---|
1.2682 | 0.52 | 100 | 0.5845 | 99.7901 |
0.4591 | 1.04 | 200 | 0.3895 | 21.4541 |
0.2482 | 1.56 | 300 | 0.2241 | 12.2145 |
0.1554 | 2.08 | 400 | 0.2092 | 11.7825 |
0.096 | 2.6 | 500 | 0.2035 | 13.9057 |
0.0765 | 3.12 | 600 | 0.2052 | 11.2517 |
0.0424 | 3.65 | 700 | 0.2024 | 13.4490 |
0.0403 | 4.17 | 800 | 0.2094 | 12.0849 |
0.0216 | 4.69 | 900 | 0.2049 | 13.1959 |
0.0201 | 5.21 | 1000 | 0.2079 | 12.1034 |
0.0101 | 5.73 | 1100 | 0.2073 | 12.5663 |
0.0131 | 6.25 | 1200 | 0.2093 | 16.7757 |
0.0088 | 6.77 | 1300 | 0.2121 | 16.5165 |
0.0073 | 7.29 | 1400 | 0.2142 | 15.3314 |
0.0036 | 7.81 | 1500 | 0.2183 | 13.7020 |
0.0047 | 8.33 | 1600 | 0.2159 | 16.1647 |
0.0038 | 8.85 | 1700 | 0.2166 | 13.7514 |
0.0027 | 9.38 | 1800 | 0.2172 | 19.9975 |
0.0028 | 9.9 | 1900 | 0.2183 | 18.2385 |
0.0015 | 10.42 | 2000 | 0.2196 | 17.4238 |
0.0023 | 10.94 | 2100 | 0.2192 | 14.7019 |
0.0012 | 11.46 | 2200 | 0.2216 | 15.9919 |
0.0017 | 11.98 | 2300 | 0.2215 | 19.6334 |
0.001 | 12.5 | 2400 | 0.2219 | 20.5160 |
0.0014 | 13.02 | 2500 | 0.2236 | 21.7813 |
0.0011 | 13.54 | 2600 | 0.2242 | 23.0897 |
0.0009 | 14.06 | 2700 | 0.2276 | 25.0401 |
0.001 | 14.58 | 2800 | 0.2269 | 18.7014 |
0.001 | 15.1 | 2900 | 0.2265 | 20.8554 |
0.0008 | 15.62 | 3000 | 0.2272 | 19.7013 |
0.0009 | 16.15 | 3100 | 0.2277 | 26.5831 |
0.0007 | 16.67 | 3200 | 0.2290 | 24.3427 |
0.0008 | 17.19 | 3300 | 0.2285 | 20.7011 |
0.0007 | 17.71 | 3400 | 0.2288 | 21.8738 |
0.0007 | 18.23 | 3500 | 0.2290 | 20.7258 |
0.0006 | 18.75 | 3600 | 0.2295 | 21.1641 |
0.0006 | 19.27 | 3700 | 0.2297 | 23.7625 |
0.0007 | 19.79 | 3800 | 0.2301 | 24.4044 |
0.0006 | 20.31 | 3900 | 0.2299 | 22.9786 |
0.0006 | 20.83 | 4000 | 0.2300 | 23.0527 |
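Note that validation WER bottoms out around steps 400-700 (roughly 11-12%) and drifts upward afterwards while training loss keeps shrinking, so the run overfits well before the final step-4000 checkpoint (roughly 23% WER) is saved. If retraining, one option is to let the `Trainer` keep the best checkpoint by WER; a hedged sketch of the relevant arguments (not part of the original run, and assuming a `compute_metrics` function that reports a `"wer"` key):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-synthesized-turkish-8-hour",  # assumed output path
    max_steps=4000,
    evaluation_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,               # checkpointing must align with evaluation
    load_best_model_at_end=True,  # reload the checkpoint with the best metric
    metric_for_best_model="wer",  # assumes compute_metrics returns {"wer": ...}
    greater_is_better=False,      # lower WER is better
)
```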
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3