---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
datasets:
  - audiofolder
metrics:
  - wer
base_model: openai/whisper-tiny
model-index:
  - name: lora-whisper-tiny
    results: []
---

# lora-whisper-tiny

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the audiofolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.1131
- Wer: 40.8556
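
WER is the word error rate, reported here as a percentage (lower is better). Below is a minimal sketch of how such a score can be computed with the 🤗 `evaluate` library; the actual evaluation script is not included in this card, so the inputs are placeholders.

```python
# Illustrative only: the card does not ship its evaluation script.
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder texts; a real run would use model transcriptions and the
# reference transcripts from the evaluation split.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# Reported as a percentage to match the figures in this card.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```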

## Model description

This is a LoRA adapter (trained with the PEFT library) for `openai/whisper-tiny`. The adapter weights must be applied on top of the base Whisper checkpoint at inference time; a hedged loading sketch is shown below.
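
A minimal inference sketch, assuming the adapter in this repository is attached to the base checkpoint with PEFT; the adapter id and the audio input are placeholders.

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

BASE_MODEL = "openai/whisper-tiny"
ADAPTER_ID = "path/to/this-adapter"  # placeholder: point at this repository or a local copy

processor = WhisperProcessor.from_pretrained(BASE_MODEL)
model = WhisperForConditionalGeneration.from_pretrained(BASE_MODEL)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA adapter
model.eval()

# Placeholder input: one second of silence at 16 kHz (Whisper's expected rate).
audio = torch.zeros(16000).numpy()
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```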

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was trained and evaluated on a dataset loaded in the `audiofolder` format; the card does not document the underlying audio corpus. A hedged loading sketch is shown below.
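
A minimal sketch of loading data in the `audiofolder` format with 🤗 Datasets; the directory path and metadata layout are assumptions, since the card gives no details.

```python
from datasets import Audio, load_dataset

# Hypothetical local directory in the audiofolder layout: audio files plus an
# optional metadata.csv that maps each file to its transcription.
dataset = load_dataset("audiofolder", data_dir="data/my_audio")

# Whisper models expect 16 kHz audio.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

print(dataset)
```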

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
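
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`, together with an illustrative `LoraConfig`; the LoRA rank, alpha, dropout, and target modules are assumptions, as the card does not record them.

```python
from peft import LoraConfig
from transformers import Seq2SeqTrainingArguments

# Illustrative LoRA settings: rank, alpha, dropout, and target_modules are NOT
# documented in this card and are shown only as plausible placeholders.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

# Mirrors the hyperparameters listed above; the Adam betas and epsilon are the
# Trainer defaults, so they are not set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="lora-whisper-tiny",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumption: matches the 200-step eval cadence below
    eval_steps=200,
    logging_steps=200,
)
```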

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.8796        | 2.7   | 200  | 1.8730          | 46.8159 |
| 1.5505        | 5.41  | 400  | 1.5270          | 44.5912 |
| 1.2514        | 8.11  | 600  | 1.2960          | 44.1416 |
| 1.1319        | 10.81 | 800  | 1.1753          | 42.2831 |
| 1.1388        | 13.51 | 1000 | 1.1591          | 42.4407 |
| 1.1174        | 16.22 | 1200 | 1.1487          | 43.4789 |
| 1.1255        | 18.92 | 1400 | 1.1414          | 43.0061 |
| 1.102         | 21.62 | 1600 | 1.1358          | 42.5519 |
| 1.0848        | 24.32 | 1800 | 1.1310          | 42.8949 |
| 1.0912        | 27.03 | 2000 | 1.1272          | 41.1337 |
| 1.0894        | 29.73 | 2200 | 1.1240          | 41.6667 |
| 1.0697        | 32.43 | 2400 | 1.1216          | 42.5426 |
| 1.064         | 35.14 | 2600 | 1.1193          | 42.1348 |
| 1.0752        | 37.84 | 2800 | 1.1175          | 41.7825 |
| 1.0983        | 40.54 | 3000 | 1.1161          | 41.7037 |
| 1.0948        | 43.24 | 3200 | 1.1150          | 41.0641 |
| 1.0319        | 45.95 | 3400 | 1.1142          | 40.9807 |
| 1.0394        | 48.65 | 3600 | 1.1136          | 41.4303 |
| 1.0602        | 51.35 | 3800 | 1.1132          | 40.8695 |
| 1.0139        | 54.05 | 4000 | 1.1131          | 40.8556 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2