---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - ASR
  - Papiamentu
  - Whisper
  - Speech Recognition
  - generated_from_trainer
datasets:
  - sonnygeorge/papi_asr_corpus
metrics:
  - wer
model-index:
  - name: Whisper Tiny Papiamentu
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Papi ASR
          type: sonnygeorge/papi_asr_corpus
        metrics:
          - name: Wer
            type: wer
            value: 85.22752415369865
---

Whisper Tiny Papiamentu

This model is a fine-tuned version of openai/whisper-tiny on the Papi ASR dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0809
  • Wer: 85.2275
  • Cer: 34.1432
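
Given the fine-tuning data, a natural use is Papiamentu speech-to-text transcription. Below is a minimal inference sketch using the `transformers` pipeline; the repository id `sonnygeorge/whisper-tiny-papi` and the input file `audio.wav` are assumptions for illustration, not confirmed by this card.

```python
# Minimal transcription sketch (assumes the repo id below matches this model
# and that audio.wav is a short speech recording you supply).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sonnygeorge/whisper-tiny-papi",  # assumed repository id
)

print(asr("audio.wav")["text"])
```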

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a configuration sketch follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 100
  • mixed_precision_training: Native AMP
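
These values correspond to the standard `Seq2SeqTrainingArguments` fields used by the `transformers` Trainer. The sketch below is a reconstruction from the list above, not the exact training script; the output directory and the `fp16` flag are assumptions (the latter inferred from "Native AMP").

```python
# Reconstructed training configuration (sketch only; see assumptions above).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-papi",   # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=100,
    fp16=True,                          # "Native AMP" mixed precision
)
```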

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer      | Cer     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|
| No log        | 0.01  | 5    | 4.6530          | 101.2317 | 53.1862 |
| No log        | 0.01  | 10   | 4.6414          | 102.1683 | 53.8408 |
| No log        | 0.02  | 15   | 4.6073          | 102.0577 | 53.5251 |
| No log        | 0.03  | 20   | 4.5651          | 101.3275 | 52.8798 |
| 4.5851        | 0.04  | 25   | 4.3546          | 100.9956 | 52.4930 |
| 4.5851        | 0.04  | 30   | 4.2726          | 98.9158  | 50.6731 |
| 4.5851        | 0.05  | 35   | 4.1573          | 98.3922  | 50.3358 |
| 4.5851        | 0.06  | 40   | 3.9879          | 96.8360  | 48.6228 |
| 4.5851        | 0.07  | 45   | 3.8552          | 96.4747  | 47.4498 |
| 4.1061        | 0.07  | 50   | 3.6580          | 94.7710  | 47.1001 |
| 4.1061        | 0.08  | 55   | 3.4486          | 93.1780  | 45.8250 |
| 4.1061        | 0.09  | 60   | 3.2790          | 94.1441  | 46.0849 |
| 4.1061        | 0.1   | 65   | 3.1437          | 93.4730  | 45.1332 |
| 4.1061        | 0.1   | 70   | 3.0021          | 91.7029  | 42.6202 |
| 3.2929        | 0.11  | 75   | 2.8536          | 92.1086  | 41.3095 |
| 3.2929        | 0.12  | 80   | 2.7078          | 90.1468  | 38.6819 |
| 3.2929        | 0.13  | 85   | 2.5575          | 88.8045  | 37.2195 |
| 3.2929        | 0.13  | 90   | 2.4029          | 88.6865  | 36.6887 |
| 3.2929        | 0.14  | 95   | 2.2431          | 89.0257  | 35.9986 |
| 2.4377        | 0.15  | 100  | 2.0809          | 85.2275  | 34.1432 |
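
Wer and Cer in the table above are reported as percentages (word and character error rate × 100). As a reference, the sketch below shows how such scores can be computed with the `evaluate` library; the prediction and reference strings are placeholders, not corpus data.

```python
# Metric computation sketch (WER/CER scaled to percent, matching the table).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["bon dia mundu"]  # model transcriptions (placeholder)
references = ["bon dia mundo"]   # ground-truth transcripts (placeholder)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}  CER: {cer:.2f}")
```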

Framework versions

  • Transformers 4.37.0.dev0
  • Pytorch 2.1.0+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0