
Whisper Large v3

This model is a fine-tuned version of openai/whisper-large-v3 on the ASR_BB_and_EC dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2251
  • WER: 145.1895
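
This repository ships a PEFT adapter on top of openai/whisper-large-v3 (see the PEFT entry under Framework versions). The snippet below is a minimal loading sketch rather than an official usage example from the authors; the dummy audio array is a placeholder for real 16 kHz speech.

```python
import numpy as np
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

# Load the frozen base model, then attach this repository's adapter.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "miosipof/whisper_large_BB_and_EC_v1")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")

# Placeholder input: one second of silence at 16 kHz stands in for real speech.
audio_array = np.zeros(16000, dtype=np.float32)
inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")

# Depending on the target data, you may also want to pass language=/task= here.
predicted_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```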

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
  • mixed_precision_training: Native AMP
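
For orientation only, here is a sketch of how the values above might map onto transformers' Seq2SeqTrainingArguments. The output directory and evaluation cadence are assumptions (the 50-step cadence is inferred from the results table below), not settings copied from the original run.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-bb-ec",  # placeholder, not the original path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,          # effective train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,                              # native AMP mixed precision
    eval_strategy="steps",
    eval_steps=50,                          # assumed from the 50-step cadence in the results table
    predict_with_generate=True,
)
```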

Training results

Training Loss    Epoch     Step    Validation Loss    WER
3.4667           0.1142    50      2.3898             126.5306
1.3136           0.2283    100     0.8863             65.3061
0.6296           0.3425    150     0.7403             55.9767
0.551            0.4566    200     0.6749             61.2245
0.4789           0.5708    250     0.6446             67.6385
0.4246           0.6849    300     0.5675             77.5510
0.3786           0.7991    350     0.5163             45.4810
0.3179           0.9132    400     0.4786             84.8397
0.3118           1.0274    450     0.4678             105.5394
0.2689           1.1416    500     0.4322             125.3644
0.2473           1.2557    550     0.3924             48.1050
0.2319           1.3699    600     0.3980             208.7464
0.2098           1.4840    650     0.3545             52.1866
0.2215           1.5982    700     0.3489             48.1050
0.1981           1.7123    750     0.3378             76.3848
0.1803           1.8265    800     0.3295             43.7318
0.1693           1.9406    850     0.3095             76.9679
0.1406           2.0548    900     0.2993             43.4402
0.1252           2.1689    950     0.2810             37.3178
0.111            2.2831    1000    0.2854             164.1399
0.1166           2.3973    1050    0.2752             124.4898
0.1183           2.5114    1100    0.2493             90.3790
0.1014           2.6256    1150    0.2441             210.2041
0.1076           2.7397    1200    0.2340             152.1866
0.0891           2.8539    1250    0.2312             214.5773
0.0841           2.9680    1300    0.2251             145.1895
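
The WER values above are percentages; scores above 100 are possible when the hypotheses contain many insertions relative to the references. Below is a minimal sketch of how WER is typically computed with the evaluate library; the example strings are invented and not drawn from the evaluation data.

```python
import evaluate

# WER as computed by evaluate/jiwer is (S + D + I) / N, where N is the number
# of reference words; insertions alone can push it above 1.0 (i.e. above 100%).
wer_metric = evaluate.load("wer")

references = ["the quick brown fox"]                  # invented example
predictions = ["the the quick brown fox jumps over"]  # invented example
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```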

Framework versions

  • PEFT 0.13.2
  • Transformers 4.45.2
  • PyTorch 2.2.0
  • Datasets 3.1.0
  • Tokenizers 0.20.3
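
Given the PEFT dependency above, this checkpoint is a parameter-efficient adapter rather than a full fine-tune. The sketch below shows a generic LoRA setup for Whisper with peft; the rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for this adapter (those are recorded in the repository's adapter_config.json).

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

# Illustrative LoRA settings only; see adapter_config.json for the real ones.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```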