Whisper small Ar - AxAI

This model is a fine-tuned version of openai/whisper-small on the Client dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5990
  • WER: 84.1146 (%)
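
WER here is reported as a percentage (100 × word error rate). As a reference point, this is a minimal sketch of how such a score can be computed with the evaluate library; the transcript lists are hypothetical placeholders, not data from this model's evaluation set:

```python
import evaluate

# Hypothetical transcripts; in practice, predictions come from the model
# and references from the evaluation set's ground-truth text.
predictions = ["the quick brown fox"]
references = ["the quick brown fox jumps"]

# The "wer" metric uses the jiwer package internally.
wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 20.0000 for this toy pair (1 deletion / 5 words)
```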

Model description

More information needed

Intended uses & limitations

More information needed
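
Pending fuller documentation, here is a minimal inference sketch. It assumes the checkpoint is hosted as UsmanAXAI/whisper-small-ft-client (the repository id for this card), and "sample.wav" is a hypothetical placeholder for your own audio file; ffmpeg must be available for audio decoding:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="UsmanAXAI/whisper-small-ft-client",
)

# "sample.wav" is a hypothetical local file; replace with your recording.
result = asr("sample.wav")
print(result["text"])
```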

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 4000
  • mixed_precision_training: Native AMP
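
For reference, a sketch of how these values map onto transformers Seq2SeqTrainingArguments; the output_dir is hypothetical, and fp16=True stands in for Native AMP mixed precision:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ft-client",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size: 2 x 8 = 16
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # mixed-precision training ("Native AMP")
)
```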

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) |
|--------------:|-------:|-----:|----------------:|--------:|
| 0.8044        | 6.37   | 200  | 1.2417          | 69.9219 |
| 0.036         | 12.75  | 400  | 1.1791          | 60.9375 |
| 0.0108        | 19.12  | 600  | 1.3128          | 80.2083 |
| 0.0035        | 25.5   | 800  | 1.3641          | 62.6953 |
| 0.0009        | 31.87  | 1000 | 1.4066          | 66.6016 |
| 0.0004        | 38.25  | 1200 | 1.4410          | 64.5833 |
| 0.0003        | 44.62  | 1400 | 1.4712          | 63.3464 |
| 0.0002        | 51.0   | 1600 | 1.4927          | 63.6068 |
| 0.0002        | 57.37  | 1800 | 1.5102          | 67.1875 |
| 0.0002        | 63.75  | 2000 | 1.5254          | 66.6016 |
| 0.0001        | 70.12  | 2200 | 1.5393          | 77.8646 |
| 0.0001        | 76.49  | 2400 | 1.5512          | 77.9297 |
| 0.0001        | 82.87  | 2600 | 1.5616          | 77.7344 |
| 0.0001        | 89.24  | 2800 | 1.5710          | 83.1380 |
| 0.0001        | 95.62  | 3000 | 1.5791          | 88.0859 |
| 0.0001        | 101.99 | 3200 | 1.5854          | 88.1510 |
| 0.0001        | 108.37 | 3400 | 1.5910          | 88.0859 |
| 0.0001        | 114.74 | 3600 | 1.5953          | 84.1146 |
| 0.0001        | 121.12 | 3800 | 1.5978          | 84.1797 |
| 0.0001        | 127.49 | 4000 | 1.5990          | 84.1146 |

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2
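
To sanity-check a local environment against these pinned versions, a quick Python check:

```python
import transformers, torch, datasets, tokenizers

# Versions this card was produced with; adjust if you intentionally upgrade.
print(transformers.__version__)  # 4.37.2
print(torch.__version__)         # 2.2.0+cu121
print(datasets.__version__)      # 2.17.1
print(tokenizers.__version__)    # 0.15.2
```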
