Whisper Serbian
Fine-tuned Whisper on Serbian datasets
Use the updated fine-tuned version Sagicc/whisper-large-v3-sr-cmb, trained with an additional 50+ hours of data.
This model is a fine-tuned version of openai/whisper-large-v3 on the Serbian subsets of the Mozilla Common Voice 13 and Google FLEURS datasets. It achieves the following results on the evaluation set:
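A minimal inference sketch using the 🤗 Transformers pipeline API. The checkpoint id follows the updated model mentioned above; the audio path is a placeholder, so substitute your own recording:

```python
def load_asr(model_id: str = "Sagicc/whisper-large-v3-sr-cmb"):
    """Build an automatic-speech-recognition pipeline for the Serbian checkpoint.

    transformers is imported lazily so the helper can be defined even
    before the library is installed.
    """
    from transformers import pipeline
    return pipeline(
        "automatic-speech-recognition",
        model=model_id,
        chunk_length_s=30,  # split long-form audio into 30-second windows
    )

if __name__ == "__main__":
    asr = load_asr()
    # "primer.wav" is a hypothetical local Serbian recording.
    print(asr("primer.wav")["text"])
```

Pass `generate_kwargs={"language": "serbian"}` to the pipeline call if you want to pin the decoding language rather than rely on Whisper's language detection.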
The following hyperparameters were used during training:
| Training Loss | Epoch | Step | Validation Loss | WER (Ortho) | WER |
|---|---|---|---|---|---|
| 0.0567 | 1.34 | 500 | 0.1512 | 0.1676 | 0.0717 |
| 0.0256 | 2.67 | 1000 | 0.1482 | 0.1585 | 0.0610 |
| 0.0114 | 4.01 | 1500 | 0.1628 | 0.1635 | 0.0556 |
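For reference, the WER columns report word error rate: the word-level edit distance between hypothesis and reference divided by the reference length. A minimal sketch of the metric itself (the actual evaluation likely normalizes case and punctuation before computing the non-orthographic WER, which is why the two columns differ):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j].
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(
                prev[j] + 1,             # deletion
                cur[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),  # substitution (free if words match)
            ))
        prev = cur
    return prev[-1] / max(len(ref), 1)

# One word dropped out of a three-word reference -> WER of 1/3.
print(wer("dobar dan svima", "dobar dan"))
```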