
Whisper Large V2

This model is a fine-tuned version of openai/whisper-large-v2 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5449
  • WER: 25.0953
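
For reference, a minimal transcription sketch using the transformers pipeline. The repository id is taken from this card's model tree; the audio file path is a placeholder, not something shipped with the model:

```python
import torch
from transformers import pipeline

# Use GPU if available; Whisper large-v2 is slow on CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-v2-North-Brabantic_and_river_area_Guelders",
    device=device,
)

# "sample.wav" is a placeholder path; any audio ffmpeg can decode works.
result = asr("sample.wav")
print(result["text"])
```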

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 12
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 20
  • num_epochs: 5
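
These values map directly onto Seq2SeqTrainingArguments. A hedged configuration sketch; output_dir is hypothetical and any hyperparameter not listed above is left at its library default:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-finetuned",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,       # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=20,
    num_train_epochs=5,
)
```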

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.841         | 0.2239 | 15   | 0.5933          | 48.4347 |
| 0.5965        | 0.4478 | 30   | 0.5012          | 27.9249 |
| 0.5018        | 0.6716 | 45   | 0.4670          | 25.0251 |
| 0.4578        | 0.8955 | 60   | 0.4569          | 27.6390 |
| 0.3824        | 1.1194 | 75   | 0.4603          | 27.2978 |
| 0.2738        | 1.3433 | 90   | 0.4537          | 25.0301 |
| 0.2375        | 1.5672 | 105  | 0.4516          | 24.4632 |
| 0.2573        | 1.7910 | 120  | 0.4381          | 25.3512 |
| 0.241         | 2.0149 | 135  | 0.4379          | 25.4766 |
| 0.1265        | 2.2388 | 150  | 0.4624          | 23.7809 |
| 0.1391        | 2.4627 | 165  | 0.4588          | 26.6406 |
| 0.1242        | 2.6866 | 180  | 0.4572          | 24.7642 |
| 0.1227        | 2.9104 | 195  | 0.4561          | 27.5738 |
| 0.0774        | 3.1343 | 210  | 0.4790          | 24.2474 |
| 0.0543        | 3.3582 | 225  | 0.4931          | 31.8483 |
| 0.0506        | 3.5821 | 240  | 0.5087          | 25.3010 |
| 0.056         | 3.8060 | 255  | 0.4933          | 27.6942 |
| 0.0527        | 4.0299 | 270  | 0.5009          | 26.2543 |
| 0.0233        | 4.2537 | 285  | 0.5447          | 27.8999 |
| 0.0193        | 4.4776 | 300  | 0.5458          | 27.0570 |
| 0.0167        | 4.7015 | 315  | 0.5421          | 24.5384 |
| 0.0183        | 4.9254 | 330  | 0.5449          | 25.0953 |
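
The WER column is conventionally computed with the evaluate library's word-error-rate metric, which returns a fraction that this card reports as a percentage. A minimal sketch with placeholder strings, not data from this model's evaluation set:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder reference/prediction pairs for illustration only.
references = ["de snelle bruine vos springt"]
predictions = ["de snelle bruine vos sprint"]

wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {100 * wer:.4f}")  # scaled to a percentage, as in the table above
```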

Framework versions

  • Transformers 4.45.0.dev0
  • Pytorch 2.1.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
Model size: 1.54B parameters (F32, safetensors)

Model tree: golesheed/whisper-v2-North-Brabantic_and_river_area_Guelders, fine-tuned from openai/whisper-large-v2.