Finetune Whisper on Frisian and English
Part of the collection *Assessing Knowledge-Distillation Based Compression of Whisper Model for Frisian ASR* (12 items).
This model is a fine-tuned version of [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on the mozilla-foundation/common_voice_6_fy_NL dataset. It achieves the following results on the evaluation set (final checkpoint, step 3000):

- Loss: 1.9212
- WER: 54.3005%

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
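As a quick illustration of how a checkpoint like this is typically loaded for inference, the sketch below uses the `transformers` ASR pipeline. The repo id is a placeholder, since this card does not state the actual model id.

```python
from transformers import pipeline

# Placeholder repo id -- this card does not state the actual model id;
# substitute the real checkpoint path or hub id here.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/distil-small-frisian",
)

# Transcribe a local audio file and print the hypothesis text.
result = asr("sample_frisian.wav")
print(result["text"])
```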
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
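The actual hyperparameter list is not preserved in this copy of the card. As a rough sketch of how such a run is usually configured with `transformers` (argument names follow recent library versions), `eval_steps=500` and `max_steps=3000` match the results table below; every other value is an assumed placeholder, not a known setting.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: eval_steps and max_steps are taken from the results table;
# all other values are assumptions, not the settings actually used.
training_args = Seq2SeqTrainingArguments(
    output_dir="./distil-small-frisian",   # assumption
    per_device_train_batch_size=16,        # assumption
    learning_rate=1e-5,                    # assumption
    warmup_steps=500,                      # assumption
    max_steps=3000,                        # last step in the results table
    eval_strategy="steps",                 # evaluate at fixed step intervals
    eval_steps=500,                        # evaluation interval in the table
    predict_with_generate=True,            # generate text so WER can be scored
    fp16=True,                             # assumption
)
```

These arguments would then be passed to a `Seq2SeqTrainer` together with the model, the train/eval datasets, and a `compute_metrics` function that scores WER.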
### Training results

| Training Loss | Epoch   | Step | Validation Loss | WER (%) |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.2482        | 5.6180  | 500  | 1.8089          | 66.9720 |
| 0.1076        | 11.2360 | 1000 | 1.8466          | 62.2349 |
| 0.0448        | 16.8539 | 1500 | 1.9436          | 59.3548 |
| 0.0062        | 22.4719 | 2000 | 1.8986          | 56.5960 |
| 0.0016        | 28.0899 | 2500 | 1.9025          | 54.4324 |
| 0.0001        | 33.7079 | 3000 | 1.9212          | 54.3005 |
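The WER column is a percentage: 54.3005 means roughly 54 word errors per 100 reference words. A minimal sketch of how this metric is computed with the `evaluate` library (the Frisian strings are toy examples, not data from this evaluation set):

```python
import evaluate

# Load the word-error-rate metric (backed by the jiwer package).
wer_metric = evaluate.load("wer")

# Toy Frisian strings for illustration only.
predictions = ["it giet oan"]
references = ["it giet net oan"]

# compute() returns a fraction; the table above reports percentages.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # -> WER: 25.00% (one deletion over four words)
```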