
openai/whisper-medium

This model is a fine-tuned version of openai/whisper-medium on the pphuc25/FrenchMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.8094
  • Wer: 45.6012
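
The checkpoint can be loaded with the Transformers automatic-speech-recognition pipeline. The sketch below is illustrative only: the repository id and the audio file path are placeholders, not values taken from this card.

```python
# Minimal inference sketch with the Transformers ASR pipeline.
# "your-username/whisper-medium-frenchmed" and "sample_fr_med.wav" are
# placeholders (assumptions); substitute this model's actual repo id and audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-medium-frenchmed",
)

result = asr("sample_fr_med.wav")
print(result["text"])
```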

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
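
For reference, the dataset named above can be pulled with the Datasets library. The splits and columns of pphuc25/FrenchMed are not documented in this card, so the sketch below only loads and inspects whatever the hub provides.

```python
# Illustrative loading of the dataset used for fine-tuning.
# Split and column names are not documented in this card, so nothing beyond
# inspection is assumed here.
from datasets import load_dataset

french_med = load_dataset("pphuc25/FrenchMed")
print(french_med)  # shows the available splits and their columns
```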

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
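
The training script itself is not included in this card; the sketch below only mirrors the hyperparameters listed above as `Seq2SeqTrainingArguments`. The output directory and the evaluation settings are assumptions, not values reported by the author.

```python
# Hedged reconstruction of the listed hyperparameters as
# transformers.Seq2SeqTrainingArguments; not the author's actual script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-frenchmed",  # placeholder (assumption)
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: inferred from the per-epoch results table
    predict_with_generate=True,   # assumption: needed to decode text and compute WER
)
```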

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.2928        | 1.0   | 215  | 1.2902          | 56.6716 |
| 0.8083        | 2.0   | 430  | 1.4740          | 75.1466 |
| 0.4531        | 3.0   | 645  | 1.4275          | 64.5161 |
| 0.316         | 4.0   | 860  | 1.6528          | 53.3724 |
| 0.1942        | 5.0   | 1075 | 1.7240          | 61.3636 |
| 0.1557        | 6.0   | 1290 | 1.6985          | 46.1877 |
| 0.1254        | 7.0   | 1505 | 1.8613          | 52.6393 |
| 0.1052        | 8.0   | 1720 | 1.7694          | 50.6598 |
| 0.0719        | 9.0   | 1935 | 1.7321          | 45.8944 |
| 0.0606        | 10.0  | 2150 | 1.8430          | 49.7801 |
| 0.0446        | 11.0  | 2365 | 1.8449          | 49.7801 |
| 0.0387        | 12.0  | 2580 | 1.8400          | 51.6862 |
| 0.0305        | 13.0  | 2795 | 1.8258          | 57.1114 |
| 0.0138        | 14.0  | 3010 | 1.9455          | 50.1466 |
| 0.0104        | 15.0  | 3225 | 1.7864          | 50.8065 |
| 0.0117        | 16.0  | 3440 | 1.8213          | 46.3343 |
| 0.0034        | 17.0  | 3655 | 1.7827          | 44.5748 |
| 0.0023        | 18.0  | 3870 | 1.7990          | 44.2082 |
| 0.0007        | 19.0  | 4085 | 1.8095          | 44.4282 |
| 0.0008        | 20.0  | 4300 | 1.8094          | 45.6012 |
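
The Wer column above is reported as a percentage. A typical way to compute it, assuming the standard `evaluate` metric rather than the author's exact evaluation code, is shown below.

```python
# Sketch of corpus-level WER computation with the `evaluate` library
# (assumption: this mirrors, but is not, the evaluation code behind this card).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["le patient présente une fièvre"]         # model transcriptions
references = ["le patient présente une fièvre légère"]   # ground-truth transcripts

# evaluate returns a fraction; the table above reports WER * 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```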

Framework versions

  • Transformers 4.41.1
  • Pytorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1