
# whisper-small-en

This model is a fine-tuned version of openai/whisper-small on the librispeech_asr dataset. It achieves the following results on the evaluation set:

- Loss: 6.7832
- Wer: 124.5115
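
As a minimal usage sketch (assuming the checkpoint is hosted on the Hub as `bgstud/whisper-small-en`, the repository id shown on this card), the model can be loaded through the `transformers` automatic-speech-recognition pipeline; the audio file path below is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the model id is assumed from this repository.
asr = pipeline("automatic-speech-recognition", model="bgstud/whisper-small-en")

# Transcribe a local audio file (placeholder path). The pipeline handles
# decoding and resampling the audio to the 16 kHz rate Whisper expects.
result = asr("sample.flac")
print(result["text"])
```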

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 100
- mixed_precision_training: Native AMP
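
The training script itself is not included in the card; as a rough sketch, the settings above map approximately onto `transformers.Seq2SeqTrainingArguments` as follows (the output directory name is arbitrary):

```python
from transformers import Seq2SeqTrainingArguments

# Approximate mapping of the listed hyperparameters; the actual training
# script is not part of this card and the output_dir is arbitrary.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-en",
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size of 32
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=100,
    fp16=True,                       # Native AMP mixed-precision training
)
```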

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer       |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 9.6259        | 1.57  | 5    | 10.7408         | 1127.3535 |
| 11.5288       | 3.29  | 10   | 9.2534          | 100.0     |
| 10.9249       | 4.86  | 15   | 7.8357          | 100.0     |
| 7.0442        | 6.57  | 20   | 6.9971          | 595.3819  |
| 8.6762        | 8.29  | 25   | 5.6135          | 312.2558  |
| 5.4239        | 9.86  | 30   | 5.4885          | 97.1581   |
| 4.986         | 11.57 | 35   | 5.2888          | 628.7744  |
| 6.708         | 13.29 | 40   | 4.9665          | 277.6199  |
| 3.9096        | 14.86 | 45   | 5.0861          | 631.9716  |
| 3.2326        | 16.57 | 50   | 5.0090          | 279.7513  |
| 3.9691        | 18.29 | 55   | 5.0804          | 133.2149  |
| 1.8661        | 19.86 | 60   | 5.4423          | 317.5844  |
| 1.1588        | 21.57 | 65   | 5.7955          | 119.5382  |
| 1.0355        | 23.29 | 70   | 6.0458          | 190.2309  |
| 0.3455        | 24.86 | 75   | 6.3057          | 106.7496  |
| 0.142         | 26.57 | 80   | 6.5767          | 209.9467  |
| 0.1722        | 28.29 | 85   | 6.5937          | 101.4210  |
| 0.0816        | 29.86 | 90   | 6.7679          | 149.7336  |
| 0.079         | 31.57 | 95   | 6.8008          | 133.5702  |
| 0.1007        | 33.29 | 100  | 6.7832          | 124.5115  |
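
Wer in the table is expressed as a percentage, and values above 100 are possible when the hypotheses contain more insertions and substitutions than there are words in the references. As an illustrative sketch (the transcripts below are placeholders, not model outputs), the metric can be computed with the `evaluate` library:

```python
import evaluate

# Word error rate, scaled to a percentage to match the table above.
# The example transcripts are placeholders, not model outputs.
wer_metric = evaluate.load("wer")
predictions = ["the cat sat on on the the mat today"]
references = ["the cat sat on the mat"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```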

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2