
whisper-large-v3-Punjabi-Version1

This model is a fine-tuned version of openai/whisper-large-v3 on the FLEURS dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1883
  • WER: 44.8199
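
Since the framework versions below list PEFT, this repository presumably hosts a lightweight adapter rather than full model weights. A minimal inference sketch under that assumption (the audio path and decoding options are illustrative, not taken from the repo):

```python
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model, then attach this repo's adapter on top of it.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(
    base, "khushi1234455687/whisper-large-v3-Punjabi-Version1"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Whisper expects 16 kHz mono audio; "sample_punjabi.wav" is a placeholder path.
audio, _ = librosa.load("sample_punjabi.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model.generate(
        input_features=inputs.input_features,
        language="pa",        # Punjabi
        task="transcribe",
    )
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```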

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
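
The card names FLEURS as the training data. A sketch of loading the Punjabi split with 🤗 Datasets; the config name "pa_in" follows google/fleurs conventions for Punjabi (India) but is an assumption, since the exact config is not stated:

```python
from datasets import load_dataset

# Load the Punjabi (India) subset of FLEURS; config name is assumed.
fleurs = load_dataset("google/fleurs", "pa_in")
print(fleurs["train"][0]["transcription"])
```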

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-06
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2000
  • training_steps: 20000
  • mixed_precision_training: Native AMP
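
As a sketch, the list above maps onto Seq2SeqTrainingArguments like this; output_dir and the evaluation cadence (every 2000 steps, matching the results table below) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-punjabi",  # assumed output path
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,               # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=20000,
    fp16=True,                    # Native AMP mixed-precision training
    eval_strategy="steps",        # assumed: evaluation every 2000 steps
    eval_steps=2000,
)
```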

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.2523        | 7.4627  | 2000  | 0.3047          | 62.7791 |
| 0.1706        | 14.9254 | 4000  | 0.2324          | 52.0393 |
| 0.1466        | 22.3881 | 6000  | 0.2120          | 49.2781 |
| 0.1411        | 29.8507 | 8000  | 0.2019          | 47.2388 |
| 0.1294        | 37.3134 | 10000 | 0.1962          | 46.3456 |
| 0.1155        | 44.7761 | 12000 | 0.1926          | 45.5716 |
| 0.1196        | 52.2388 | 14000 | 0.1905          | 44.9539 |
| 0.1111        | 59.7015 | 16000 | 0.1889          | 44.8199 |
| 0.1066        | 67.1642 | 18000 | 0.1883          | 44.5743 |
| 0.1138        | 74.6269 | 20000 | 0.1883          | 44.8199 |
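
The WER values above appear to be percentages. They could be reproduced with the `evaluate` library roughly as follows; the transcript strings here are illustrative placeholders, not data from the evaluation run:

```python
import evaluate

wer_metric = evaluate.load("wer")

# In practice these would be the decoded model outputs and the FLEURS
# reference transcripts for the evaluation split.
predictions = ["illustrative model transcript"]
references = ["illustrative reference transcript"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```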

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1