emergency_01

This model is a fine-tuned version of openai/whisper-tiny on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3525
  • WER: 18.0374

Model description

More information needed

Intended uses & limitations

More information needed
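While the intended uses are not yet documented, the model can be exercised like any Whisper checkpoint. Below is a minimal transcription sketch, assuming the model lives in the `ernistts/emergency_01` repository and is loaded through the `transformers` automatic-speech-recognition pipeline; the audio path is a hypothetical placeholder.

```python
MODEL_ID = "ernistts/emergency_01"  # this repository

def transcribe(audio_path: str) -> str:
    """Transcribe an audio file with the fine-tuned Whisper-tiny checkpoint.

    The import is kept inside the function so the sketch reads without
    transformers installed; calling it downloads the model weights.
    """
    from transformers import pipeline

    asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
    return asr(audio_path)["text"]

# Usage (hypothetical audio file):
# print(transcribe("sample.wav"))
```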

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • training_steps: 3500
  • mixed_precision_training: Native AMP
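The hyperparameters above can be gathered into a single config dict. This is a sketch: the key names mirror the field names of `transformers`' `Seq2SeqTrainingArguments`, on the assumption that the card was produced by the standard `Trainer` workflow (the exact training script is not shown in the card).

```python
# Training hyperparameters from the list above, as one config dict.
# Key names follow transformers' Seq2SeqTrainingArguments convention
# (an assumption; the card does not include the training script).
training_args = {
    "learning_rate": 1e-05,
    "per_device_train_batch_size": 64,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "optim": "adamw_torch",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
    "lr_scheduler_type": "linear",
    "warmup_steps": 250,
    "max_steps": 3500,
    "fp16": True,  # "Native AMP" mixed-precision training
}
```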

Training results

Training Loss | Epoch   | Step | Validation Loss | WER
------------- | ------- | ---- | --------------- | -------
0.0289        | 15.625  | 500  | 0.2118          | 18.2243
0.0061        | 31.25   | 1000 | 0.2911          | 20.1869
0.0007        | 46.875  | 1500 | 0.3276          | 18.9720
0.0002        | 62.5    | 2000 | 0.3383          | 18.1308
0.0001        | 78.125  | 2500 | 0.3457          | 18.1308
0.0001        | 93.75   | 3000 | 0.3507          | 18.0374
0.0001        | 109.375 | 3500 | 0.3525          | 18.0374
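The WER column above is the word error rate: word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words, usually reported as a percentage. A minimal self-contained sketch of the metric (real evaluations typically use a library such as `jiwer` or `evaluate`, and apply text normalization first):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution over three reference words -> WER 1/3
print(wer("the cat sat", "the bat sat"))
```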

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
