Whisper-WOLOF-40-hours-Kallaama-dataset

This model is a fine-tuned version of openai/whisper-small on 40 hours of Wolof speech from the Kallaama dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the list):

  • Loss: 1.2031
  • Wer: 31.4362
  • Cer: 17.5733
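
For reference, the checkpoint can be loaded with the standard transformers automatic-speech-recognition pipeline. A minimal inference sketch, assuming the model is published on the Hub as asr-africa/Whisper-WOLOF-40-hours-Kallaama-dataset and that `wolof_sample.wav` is a placeholder for your own audio file:

```python
# Minimal inference sketch; "wolof_sample.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/Whisper-WOLOF-40-hours-Kallaama-dataset",
)

# The pipeline loads and resamples the audio file before transcribing it.
result = asr("wolof_sample.wav")
print(result["text"])
```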

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch in Python follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
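
A minimal sketch of how the values above map onto Seq2SeqTrainingArguments; the output directory is an illustrative placeholder, and any settings not listed above (precision, evaluation and logging cadence, etc.) are left at their defaults rather than taken from this card:

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir is an assumed placeholder; betas=(0.9, 0.999) and epsilon=1e-08
# are the adamw_torch defaults, so they are not set explicitly here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-wolof-kallaama",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```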

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|
| 2.6381 | 0.8865 | 500 | 1.4538 | 78.5863 | 46.2285 |
| 1.146 | 1.7730 | 1000 | 0.9648 | 65.8870 | 39.5269 |
| 0.8223 | 2.6596 | 1500 | 0.7969 | 51.5679 | 30.6941 |
| 0.6542 | 3.5461 | 2000 | 0.7221 | 51.9664 | 31.0189 |
| 0.538 | 4.4326 | 2500 | 0.6842 | 43.1480 | 25.9115 |
| 0.4373 | 5.3191 | 3000 | 0.6807 | 42.8274 | 25.0277 |
| 0.3507 | 6.2057 | 3500 | 0.6903 | 38.5395 | 22.5312 |
| 0.2682 | 7.0922 | 4000 | 0.7086 | 34.8233 | 19.8022 |
| 0.1924 | 7.9787 | 4500 | 0.7283 | 36.1227 | 21.9131 |
| 0.1324 | 8.8652 | 5000 | 0.7627 | 39.5530 | 24.4465 |
| 0.0964 | 9.7518 | 5500 | 0.7997 | 36.9196 | 21.1344 |
| 0.0632 | 10.6383 | 6000 | 0.8383 | 34.6327 | 19.2284 |
| 0.0444 | 11.5248 | 6500 | 0.8721 | 36.6078 | 21.0329 |
| 0.0298 | 12.4113 | 7000 | 0.8992 | 33.0215 | 18.5752 |
| 0.0204 | 13.2979 | 7500 | 0.9203 | 32.8742 | 18.1360 |
| 0.0158 | 14.1844 | 8000 | 0.9349 | 32.7096 | 18.4977 |
| 0.013 | 15.0709 | 8500 | 0.9601 | 34.0437 | 18.9774 |
| 0.0108 | 15.9574 | 9000 | 0.9658 | 33.1341 | 18.5401 |
| 0.0095 | 16.8440 | 9500 | 0.9900 | 33.7318 | 18.9424 |
| 0.0084 | 17.7305 | 10000 | 1.0027 | 33.8444 | 19.5254 |
| 0.0075 | 18.6170 | 10500 | 1.0261 | 32.5710 | 18.4405 |
| 0.0065 | 19.5035 | 11000 | 1.0283 | 32.6923 | 18.2670 |
| 0.0057 | 20.3901 | 11500 | 1.0342 | 32.2852 | 18.2818 |
| 0.0054 | 21.2766 | 12000 | 1.0449 | 32.4324 | 18.3870 |
| 0.0053 | 22.1631 | 12500 | 1.0484 | 32.4411 | 18.4423 |
| 0.0046 | 23.0496 | 13000 | 1.0533 | 32.2419 | 18.1508 |
| 0.004 | 23.9362 | 13500 | 1.0694 | 32.4584 | 17.9995 |
| 0.0035 | 24.8227 | 14000 | 1.0747 | 32.1812 | 17.8998 |
| 0.0033 | 25.7092 | 14500 | 1.0827 | 32.6836 | 18.3371 |
| 0.0032 | 26.5957 | 15000 | 1.0824 | 32.7616 | 18.4940 |
| 0.0026 | 27.4823 | 15500 | 1.0965 | 31.9820 | 17.8482 |
| 0.0025 | 28.3688 | 16000 | 1.1104 | 31.6875 | 17.6120 |
| 0.0022 | 29.2553 | 16500 | 1.1076 | 33.1947 | 18.6453 |
| 0.0021 | 30.1418 | 17000 | 1.1103 | 32.6836 | 18.3519 |
| 0.0018 | 31.0284 | 17500 | 1.1136 | 31.4103 | 17.1415 |
| 0.0017 | 31.9149 | 18000 | 1.1142 | 32.0599 | 18.0844 |
| 0.0018 | 32.8014 | 18500 | 1.1220 | 31.5142 | 17.7209 |
| 0.0014 | 33.6879 | 19000 | 1.1192 | 31.8867 | 17.7873 |
| 0.001 | 34.5745 | 19500 | 1.1256 | 32.0773 | 18.1305 |
| 0.0008 | 35.4610 | 20000 | 1.1326 | 31.8347 | 17.5991 |
| 0.0009 | 36.3475 | 20500 | 1.1430 | 31.3756 | 17.3426 |
| 0.0007 | 37.2340 | 21000 | 1.1430 | 31.6268 | 17.7393 |
| 0.0008 | 38.1206 | 21500 | 1.1548 | 32.2765 | 18.3814 |
| 0.0008 | 39.0071 | 22000 | 1.1528 | 31.0638 | 17.1655 |
| 0.0005 | 39.8936 | 22500 | 1.1551 | 31.3669 | 17.4589 |
| 0.0006 | 40.7801 | 23000 | 1.1581 | 31.3150 | 17.3611 |
| 0.0005 | 41.6667 | 23500 | 1.1650 | 31.9387 | 17.9349 |
| 0.0003 | 42.5532 | 24000 | 1.1804 | 30.6566 | 16.8278 |
| 0.0003 | 43.4397 | 24500 | 1.1770 | 31.7827 | 17.7818 |
| 0.0002 | 44.3262 | 25000 | 1.1848 | 31.8520 | 18.0844 |
| 0.0003 | 45.2128 | 25500 | 1.1853 | 31.9040 | 17.9534 |
| 0.0002 | 46.0993 | 26000 | 1.1888 | 32.1552 | 18.2836 |
| 0.0002 | 46.9858 | 26500 | 1.1925 | 31.4709 | 17.6637 |
| 0.0002 | 47.8723 | 27000 | 1.1984 | 31.8174 | 17.8260 |
| 0.0001 | 48.7589 | 27500 | 1.2009 | 31.4016 | 17.5972 |
| 0.0001 | 49.6454 | 28000 | 1.2031 | 31.4362 | 17.5733 |
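
The Wer and Cer columns are word and character error rates reported as percentages. A minimal sketch of how such scores can be recomputed with the Hugging Face evaluate library (an assumed extra dependency, not listed under Framework versions); the transcripts below are placeholders, not samples from the Kallaama evaluation set:

```python
# Sketch of how WER/CER scores like those in the table can be computed.
# `evaluate` (and its `jiwer` backend) are assumed extra dependencies;
# the transcripts are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["example reference transcript"]   # ground-truth text (placeholder)
predictions = ["example predicted transcript"]  # model output (placeholder)

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}  CER: {cer:.4f}")
```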

Framework versions

  • Transformers 4.46.0
  • Pytorch 2.1.0+cu118
  • Datasets 3.0.2
  • Tokenizers 0.20.1