./4607

This model is a fine-tuned version of openai/whisper-large-v3 on the 4607 FULL-2024-09-26 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5059
  • WER Ortho: 28.4797
  • WER: 21.0751
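
The checkpoint can be used like any other Whisper model on the Hub. A minimal sketch using the transformers ASR pipeline (the repo id is taken from this page; "audio.wav" is a placeholder path):

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint through the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Makkoen/whisper-large-v3-cit-do015-wd0-lr1e-06-FULL4",
)

# Transcribe a local audio file ("audio.wav" is a placeholder).
result = asr("audio.wav")
print(result["text"])
```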

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 1400
  • mixed_precision_training: Native AMP
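
Assuming the standard Transformers Seq2SeqTrainer recipe (not confirmed by this card), the hyperparameters above would map to Seq2SeqTrainingArguments roughly as follows:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: field names mirror the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./4607",
    learning_rate=1e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 4 x 4 = effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1400,
    fp16=True,  # Native AMP mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
)
```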

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER Ortho | WER     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.9206        | 0.7715 | 200  | 0.6309          | 33.9104   | 25.9900 |
| 0.6533        | 1.5429 | 400  | 0.5581          | 30.2736   | 22.5910 |
| 0.5875        | 2.3144 | 600  | 0.5322          | 29.5128   | 22.8648 |
| 0.5351        | 3.0858 | 800  | 0.5176          | 29.3103   | 21.8431 |
| 0.5126        | 3.8573 | 1000 | 0.5112          | 28.7100   | 21.3222 |
| 0.4956        | 4.6287 | 1200 | 0.5063          | 28.6053   | 21.0751 |
| 0.4785        | 5.4002 | 1400 | 0.5059          | 28.4797   | 21.0751 |
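
The WER values above appear to be reported as percentages. As a sketch, the same metric can be computed from decoded transcripts with the evaluate library (the example strings are placeholders):

```python
import evaluate

# Load the word error rate metric (backed by jiwer).
wer_metric = evaluate.load("wer")

# Placeholder transcripts; in practice these come from model decoding.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

# Multiply by 100 to match the percentage-style figures in the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```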

Framework versions

  • Transformers 4.45.1
  • Pytorch 1.13.1+cu117
  • Datasets 3.0.1
  • Tokenizers 0.20.0
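
To reproduce this run's environment, the listed versions can be verified at import time; a minimal sanity check:

```python
# Verify the environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

assert transformers.__version__ == "4.45.1"
assert torch.__version__.startswith("1.13.1")  # 1.13.1+cu117 on CUDA builds
assert datasets.__version__ == "3.0.1"
assert tokenizers.__version__ == "0.20.0"
```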