
Whisper Small dysarthric Dutch

This model is a fine-tuned version of qmeeus/whisper-small-nl on the copas-full configuration of the data/copas dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4702
  • WER: 22.1638
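
For reference, a minimal transcription sketch using the transformers ASR pipeline is shown below. The repository id and audio file name are placeholders, not values confirmed by this card; replace them with this model's actual Hub id and your own audio.

```python
# Minimal inference sketch with the transformers ASR pipeline.
# NOTE: the model id below is a placeholder for this repository's Hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="qmeeus/whisper-small-dysarthric-dutch",  # placeholder id
)

# Transcribe a local audio file; the pipeline decodes and resamples
# the audio to the 16 kHz rate Whisper expects.
result = asr("speech_sample.wav")  # placeholder file name
print(result["text"])
```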

Model description

This is qmeeus/whisper-small-nl, a Dutch Whisper Small checkpoint, fine-tuned for automatic speech recognition of dysarthric Dutch speech.

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned and evaluated on the copas-full configuration of the data/copas dataset.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the training-arguments sketch after this list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
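
As a reading aid, the sketch below shows how these values map onto transformers' Seq2SeqTrainingArguments. It is not the original training script: model and dataset loading are omitted, and output_dir is a placeholder. The Adam betas and epsilon listed above match the Trainer's defaults, so they are not set explicitly.

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-dysarthric-dutch",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10_000,
    fp16=True,  # native AMP mixed-precision training
)
```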

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|---------------|-------|-------|-----------------|---------|
| 0.1618        | 0.05  | 500   | 0.3787          | 28.9235 |
| 0.0583        | 1.05  | 1000  | 0.3732          | 25.7702 |
| 0.0382        | 2.05  | 1500  | 0.4001          | 25.4621 |
| 0.0316        | 3.05  | 2000  | 0.4081          | 24.7010 |
| 0.0169        | 4.05  | 2500  | 0.4325          | 24.1935 |
| 0.0153        | 5.05  | 3000  | 0.4325          | 33.4179 |
| 0.0074        | 6.05  | 3500  | 0.4367          | 23.9398 |
| 0.0096        | 7.05  | 4000  | 0.4390          | 23.3055 |
| 0.0054        | 8.05  | 4500  | 0.4441          | 23.7042 |
| 0.0032        | 9.04  | 5000  | 0.4493          | 23.2693 |
| 0.004         | 10.04 | 5500  | 0.4524          | 23.3418 |
| 0.0048        | 11.04 | 6000  | 0.4498          | 23.7224 |
| 0.001         | 12.04 | 6500  | 0.4577          | 22.8887 |
| 0.0002        | 13.04 | 7000  | 0.4577          | 22.0913 |
| 0.0001        | 14.04 | 7500  | 0.4616          | 22.1276 |
| 0.0001        | 15.04 | 8000  | 0.4639          | 22.2726 |
| 0.0001        | 16.04 | 8500  | 0.4662          | 22.1095 |
| 0.0001        | 17.04 | 9000  | 0.4684          | 22.1457 |
| 0.0001        | 18.04 | 9500  | 0.4697          | 22.1457 |
| 0.0001        | 19.04 | 10000 | 0.4702          | 22.1638 |
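
The WER values above are word error rates in percent. A minimal sketch of how such a score can be computed with the evaluate library follows; the choice of library is an assumption, as the card does not state the exact metric implementation.

```python
# Sketch: computing WER with the `evaluate` library (assumed tooling).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["de kat zit op de mat"]  # example model transcriptions
references = ["de kat zat op de mat"]   # example ground-truth transcripts

# evaluate returns WER as a fraction; scale by 100 to match the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```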

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.12.1+cu116
  • Datasets 2.4.0
  • Tokenizers 0.12.1
