whisper-small-ne-NP

This model is a fine-tuned version of openai/whisper-small on the common_voice_13_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6005
  • WER: 57.3876
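
The checkpoint can be used for Nepali transcription through the Transformers speech-recognition pipeline. The sketch below is illustrative only: the repo id and the audio file name are placeholders, not values taken from this card.

```python
# Minimal inference sketch; "whisper-small-ne-NP" and the audio file name
# are placeholders — replace them with the actual repo id (or local path)
# and your own recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-small-ne-NP",  # placeholder repo id / local path
)

# Transcribe a local audio file (decoding a file path relies on ffmpeg being installed).
result = asr("nepali_sample.wav")
print(result["text"])
```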

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
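
The card leaves this section blank, but the metadata above names common_voice_13_0, and the model name points at the Nepali ("ne-NP") locale. A hedged sketch of loading that locale with the `datasets` library, for illustration only:

```python
# Hedged sketch: the exact split/config used for training is not documented here,
# so the "ne-NP" config and the "test" split are assumptions inferred from the
# model name. Common Voice datasets are gated, so authentication to the
# Hugging Face Hub may be required.
from datasets import load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "ne-NP",          # locale inferred from the model name
    split="test",
)
print(common_voice)
```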

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of equivalent training arguments follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
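
For readers reproducing the run, the list above maps onto `Seq2SeqTrainingArguments` roughly as sketched below. The output directory and evaluation strategy are assumptions, not values recorded in the card; the 100-step evaluation interval is read off the results table that follows.

```python
# Hedged sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# Adam betas/epsilon match the library defaults, so they are not set explicitly.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ne-NP",   # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                            # native AMP mixed precision
    evaluation_strategy="steps",          # assumed
    eval_steps=100,                       # matches the per-100-step results table
    predict_with_generate=True,           # needed to score WER on generated transcripts
)
```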

Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|---------------|-------|------|-----------------|---------|
| 0.9935        | 0.17  | 100  | 1.3460          | 91.4347 |
| 0.6624        | 0.35  | 200  | 1.0307          | 85.6531 |
| 0.5002        | 0.52  | 300  | 0.8406          | 77.5161 |
| 0.4426        | 0.70  | 400  | 0.7038          | 76.2313 |
| 0.3063        | 0.87  | 500  | 0.5308          | 71.5203 |
| 0.1949        | 1.05  | 600  | 0.5200          | 66.1670 |
| 0.1974        | 1.22  | 700  | 0.5140          | 65.0964 |
| 0.1734        | 1.40  | 800  | 0.4423          | 67.6660 |
| 0.1619        | 1.57  | 900  | 0.4705          | 62.0985 |
| 0.1697        | 1.75  | 1000 | 0.4676          | 67.0236 |
| 0.1536        | 1.92  | 1100 | 0.4441          | 62.7409 |
| 0.0722        | 2.10  | 1200 | 0.4492          | 58.0300 |
| 0.0674        | 2.27  | 1300 | 0.4597          | 59.9572 |
| 0.0766        | 2.45  | 1400 | 0.4720          | 62.3126 |
| 0.0732        | 2.62  | 1500 | 0.4720          | 60.5996 |
| 0.0737        | 2.80  | 1600 | 0.4704          | 61.0278 |
| 0.0833        | 2.97  | 1700 | 0.4711          | 59.7430 |
| 0.0421        | 3.15  | 1800 | 0.5040          | 60.5996 |
| 0.0444        | 3.32  | 1900 | 0.5096          | 62.5268 |
| 0.0343        | 3.50  | 2000 | 0.5276          | 62.5268 |
| 0.0347        | 3.67  | 2100 | 0.5068          | 57.3876 |
| 0.0326        | 3.85  | 2200 | 0.5143          | 59.3148 |
| 0.0219        | 4.02  | 2300 | 0.5225          | 59.3148 |
| 0.0129        | 4.20  | 2400 | 0.5353          | 59.1006 |
| 0.0159        | 4.37  | 2500 | 0.5639          | 56.9593 |
| 0.0168        | 4.55  | 2600 | 0.5303          | 55.8887 |
| 0.0131        | 4.72  | 2700 | 0.5455          | 58.6724 |
| 0.0122        | 4.90  | 2800 | 0.5548          | 56.5310 |
| 0.0035        | 5.07  | 2900 | 0.5661          | 56.7452 |
| 0.0027        | 5.24  | 3000 | 0.5789          | 57.6017 |
| 0.0034        | 5.42  | 3100 | 0.5887          | 59.1006 |
| 0.0047        | 5.59  | 3200 | 0.5853          | 59.9572 |
| 0.0054        | 5.77  | 3300 | 0.5912          | 58.4582 |
| 0.0042        | 5.94  | 3400 | 0.5862          | 59.3148 |
| 0.0013        | 6.12  | 3500 | 0.5935          | 56.7452 |
| 0.0010        | 6.29  | 3600 | 0.5991          | 57.3876 |
| 0.0008        | 6.47  | 3700 | 0.6012          | 57.6017 |
| 0.0014        | 6.64  | 3800 | 0.6002          | 57.8158 |
| 0.0010        | 6.82  | 3900 | 0.6006          | 57.8158 |
| 0.0013        | 6.99  | 4000 | 0.6005          | 57.3876 |
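
The WER column reports word error rate as a percentage. A minimal sketch of how such values are typically computed with the `evaluate` library, using placeholder strings:

```python
# Hedged sketch of the WER computation; the reference/prediction strings below
# are placeholders, not examples from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

references = ["ground-truth transcript"]   # placeholder reference transcripts
predictions = ["model transcript"]         # placeholder model outputs

# compute() returns a fraction; multiply by 100 to match the percentages above.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```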

Framework versions

  • Transformers 4.28.1
  • PyTorch 2.0.0
  • Datasets 2.11.0
  • Tokenizers 0.13.3