
whisper-a-nomimose-trial

This model is a fine-tuned version of openai/whisper-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0481
  • WER: 132.6667

Model description

More information needed

Intended uses & limitations

More information needed
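
The card does not document intended uses, but the checkpoint can be loaded like any Whisper fine-tune for automatic speech recognition. Below is a minimal usage sketch with the transformers pipeline; the audio filename is a placeholder, not part of this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for speech-to-text.
asr = pipeline(
    "automatic-speech-recognition",
    model="susmitabhatt/whisper-a-nomimose-trial",
)

# "sample.wav" is a hypothetical input file; 16 kHz mono audio
# is Whisper's native input format.
result = asr("sample.wav")
print(result["text"])
```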

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0004
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 132
  • num_epochs: 20
  • mixed_precision_training: Native AMP
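
A sketch of how these settings map onto transformers' Seq2SeqTrainingArguments; the output directory is a placeholder and the original training script is not included in this card:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; paths are hypothetical.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-a-nomimose-trial",  # placeholder
    learning_rate=4e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 8 * 2 = 16
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=132,
    num_train_epochs=20,
    fp16=True,  # native AMP mixed-precision training
)
```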

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER      |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.6555        | 1.0     | 109  | 0.8617          | 56.6667  |
| 0.154         | 2.0     | 218  | 0.2293          | 52.3333  |
| 0.0778        | 3.0     | 327  | 0.3132          | 63.3333  |
| 0.0633        | 4.0     | 436  | 0.1483          | 80.0     |
| 0.0503        | 5.0     | 545  | 0.3422          | 54.6667  |
| 0.0533        | 6.0     | 654  | 0.1111          | 186.0    |
| 0.0332        | 7.0     | 763  | 0.1978          | 101.0    |
| 0.0242        | 8.0     | 872  | 0.1586          | 62.0     |
| 0.0204        | 9.0     | 981  | 0.0867          | 59.3333  |
| 0.0191        | 10.0    | 1090 | 0.1203          | 179.0    |
| 0.0139        | 11.0    | 1199 | 0.0813          | 40.6667  |
| 0.0083        | 12.0    | 1308 | 0.0827          | 60.0     |
| 0.0052        | 13.0    | 1417 | 0.0945          | 131.3333 |
| 0.0055        | 14.0    | 1526 | 0.0923          | 182.6667 |
| 0.0055        | 15.0    | 1635 | 0.0937          | 152.6667 |
| 0.0045        | 16.0    | 1744 | 0.0532          | 117.6667 |
| 0.0024        | 17.0    | 1853 | 0.0575          | 126.3333 |
| 0.002         | 18.0    | 1962 | 0.0505          | 136.0    |
| 0.0022        | 19.0    | 2071 | 0.0491          | 136.3333 |
| 0.0013        | 19.8203 | 2160 | 0.0481          | 132.6667 |
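
Note that WER is reported as a percentage and can exceed 100 when the total number of errors (substitutions, deletions, and especially insertions) is larger than the number of words in the reference, as in several epochs above. A minimal sketch of the standard computation with the evaluate library; the example strings are illustrative, not from the training data:

```python
import evaluate

# Word error rate = (substitutions + deletions + insertions) / reference words.
wer_metric = evaluate.load("wer")

# Three inserted words against a two-word reference -> WER of 150%.
wer = 100 * wer_metric.compute(
    predictions=["hello world how are you"],
    references=["hello world"],
)
print(f"WER: {wer:.2f}")
```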

Framework versions

  • Transformers 4.47.0.dev0
  • Pytorch 2.4.0
  • Datasets 3.0.1
  • Tokenizers 0.20.0
