Fine-tune Whisper on Frisian and English
Part of the collection "Assessing Knowledge-Distillation Based Compression of Whisper Model for Frisian ASR" (12 items).
This model is a fine-tuned version of openai/whisper-small on the LibriSpeech dataset. It achieves the evaluation-set results reported in the training table below.

Model description: more information needed.

Intended uses & limitations: more information needed.

Training and evaluation data: more information needed.
Training results (validation loss and word error rate logged during fine-tuning):
| Training Loss | Epoch | Step | Validation Loss | WER |
|---|---|---|---|---|
| 0.5827 | 5.1282 | 100 | 0.7468 | 3.4509 |
| 0.3801 | 10.2564 | 200 | 0.5781 | 3.4856 |
| 0.1166 | 15.3846 | 300 | 0.2330 | 3.8872 |
| 0.0469 | 20.5128 | 400 | 0.1750 | 4.1053 |
| 0.0249 | 25.6410 | 500 | 0.1637 | 4.1277 |
| 0.0173 | 30.7692 | 600 | 0.1609 | 4.1297 |
| 0.0119 | 35.8974 | 700 | 0.1604 | 4.1358 |
| 0.0087 | 41.0256 | 800 | 0.1607 | 4.1501 |
| 0.0074 | 46.1538 | 900 | 0.1609 | 4.1460 |
| 0.0071 | 51.2821 | 1000 | 0.1610 | 4.1481 |
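The Wer column reports word error rate, presumably as a percentage. As a minimal pure-Python sketch of how WER is computed from reference and hypothesis transcripts (illustrative only; the exact scorer used to produce the numbers above is not stated in the card):

```python
# Minimal word error rate (WER) computation. Illustrative sketch only;
# the actual scorer used for this model's metrics is not specified.

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming Levenshtein distance over words.
    d = list(range(len(hyp) + 1))          # d[j] = distance(ref[:0], hyp[:j])
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds distance(i-1, j-1)
        for j, h in enumerate(hyp, 1):
            cur = d[j]                     # distance(i-1, j)
            d[j] = min(d[j] + 1,           # deletion of a reference word
                       d[j - 1] + 1,       # insertion of a hypothesis word
                       prev + (r != h))    # substitution (free if words match)
            prev = cur
    return d[-1] / len(ref)

# One substitution out of four reference words.
print(100 * wer("it rains in frisia", "it trains in frisia"))
```

Libraries such as `jiwer` implement the same metric with normalization options; the sketch above assumes whitespace-tokenized, already-normalized text.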
Base model: openai/whisper-small
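One thing the training table makes visible: validation loss keeps improving through step 1000, while validation WER is lowest at the very first logged checkpoint. A small sketch of checkpoint selection by WER, with the (step, validation loss, WER) triples copied from the table above:

```python
# (step, validation_loss, wer) rows copied from the training-results table.
rows = [
    (100, 0.7468, 3.4509),
    (200, 0.5781, 3.4856),
    (300, 0.2330, 3.8872),
    (400, 0.1750, 4.1053),
    (500, 0.1637, 4.1277),
    (600, 0.1609, 4.1297),
    (700, 0.1604, 4.1358),
    (800, 0.1607, 4.1501),
    (900, 0.1609, 4.1460),
    (1000, 0.1610, 4.1481),
]

# Best checkpoint if model selection targets WER rather than loss.
best_step, best_loss, best_wer = min(rows, key=lambda r: r[2])
print(best_step, best_wer)
```

On these numbers, the lowest-WER checkpoint is an early one even though its validation loss is the highest logged, so which checkpoint to keep depends on the metric one optimizes for.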