---
language:
  - ga
  - en
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - ymoslem/IWSLT2023-GA-EN
  - ymoslem/FLEURS-GA-EN
  - ymoslem/BitesizeIrish-GA-EN
  - ymoslem/SpokenWords-GA-EN-MTed
metrics:
  - bleu
  - wer
model-index:
  - name: Whisper Small GA-EN Speech Translation Raw + warmup_ratio=0.01
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: IWSLT-2023, FLEURS, BiteSize, and SpokenWords
          type: ymoslem/IWSLT2023-GA-EN
        metrics:
          - name: Bleu
            type: bleu
            value: 30.14
          - name: Wer
            type: wer
            value: 68.75281404772625
---

# Whisper Small GA-EN Speech Translation Raw + warmup_ratio=0.01

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the IWSLT-2023, FLEURS, BiteSize, and SpokenWords datasets. It achieves the following results on the evaluation set:

- Loss: 1.6820
- Bleu: 30.14
- Chrf: 44.97
- Wer: 68.7528
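
As a minimal usage sketch (not part of the original card), the checkpoint can be run for Irish-to-English speech translation with the Transformers pipeline API; the repository ID and audio path below are placeholders:

```python
# Minimal sketch: GA -> EN speech translation with a fine-tuned Whisper Small checkpoint.
# The model ID and audio path are placeholders; substitute the actual checkpoint.
from transformers import pipeline

translator = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-ga2en",  # placeholder repository ID
)

# Decoding with task="translate" makes Whisper emit English text for Irish audio.
result = translator(
    "sample_ga.wav",  # placeholder path to an Irish-language audio file
    generate_kwargs={"task": "translate"},
)
print(result["text"])
```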

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `Seq2SeqTrainingArguments` appears after this list):

- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3000
- mixed_precision_training: Native AMP
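
As a rough sketch only (not the original training script), these settings translate into `Seq2SeqTrainingArguments` roughly as follows; the output directory is a placeholder, and the Adam betas/epsilon listed above are the Trainer's optimizer defaults:

```python
# Rough sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# Not the original training script; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ga2en",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=3000,
    fp16=True,  # mixed precision ("Native AMP")
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer defaults,
# so no explicit optimizer arguments are needed here.
```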

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Bleu  | Chrf  | Wer      |
|:-------------:|:------:|:----:|:---------------:|:-----:|:-----:|:--------:|
| 2.0448        | 0.2155 | 100  | 1.7532          | 10.55 | 28.24 | 117.7848 |
| 1.5028        | 0.4310 | 200  | 1.5587          | 17.12 | 35.11 | 93.2913  |
| 1.3161        | 0.6466 | 300  | 1.5073          | 17.41 | 37.84 | 104.0973 |
| 1.0537        | 0.8621 | 400  | 1.4560          | 17.49 | 38.19 | 103.8721 |
| 0.4704        | 1.0776 | 500  | 1.4946          | 16.44 | 36.79 | 102.9266 |
| 0.4589        | 1.2931 | 600  | 1.5117          | 21.55 | 38.66 | 82.6204  |
| 0.4343        | 1.5086 | 700  | 1.5146          | 24.5  | 41.43 | 75.5966  |
| 0.4068        | 1.7241 | 800  | 1.4547          | 28.37 | 45.27 | 65.6011  |
| 0.3757        | 1.9397 | 900  | 1.4957          | 27.0  | 43.66 | 67.4471  |
| 0.1388        | 2.1552 | 1000 | 1.5642          | 26.72 | 41.74 | 66.5016  |
| 0.1472        | 2.3707 | 1100 | 1.5845          | 27.74 | 42.67 | 68.1675  |
| 0.1408        | 2.5862 | 1200 | 1.5932          | 28.95 | 44.07 | 65.4660  |
| 0.1436        | 2.8017 | 1300 | 1.5808          | 27.66 | 42.71 | 68.3476  |
| 0.1094        | 3.0172 | 1400 | 1.5684          | 27.1  | 43.29 | 71.3643  |
| 0.0677        | 3.2328 | 1500 | 1.6287          | 26.92 | 42.33 | 68.4376  |
| 0.0567        | 3.4483 | 1600 | 1.6431          | 23.58 | 42.85 | 81.7199  |
| 0.0594        | 3.6638 | 1700 | 1.6084          | 26.54 | 42.45 | 77.4426  |
| 0.0623        | 3.8793 | 1800 | 1.5817          | 29.29 | 45.85 | 67.8523  |
| 0.0323        | 4.0948 | 1900 | 1.6630          | 29.24 | 43.47 | 67.5822  |
| 0.0317        | 4.3103 | 2000 | 1.6494          | 26.43 | 43.4  | 73.3904  |
| 0.0325        | 4.5259 | 2100 | 1.6968          | 27.06 | 42.4  | 68.5277  |
| 0.025         | 4.7414 | 2200 | 1.6436          | 29.16 | 44.24 | 67.5371  |
| 0.0316        | 4.9569 | 2300 | 1.6412          | 29.9  | 46.15 | 66.5016  |
| 0.0159        | 5.1724 | 2400 | 1.6714          | 29.56 | 44.68 | 66.7717  |
| 0.0158        | 5.3879 | 2500 | 1.6458          | 29.56 | 45.46 | 65.8262  |
| 0.015         | 5.6034 | 2600 | 1.6595          | 29.97 | 44.9  | 68.0774  |
| 0.0145        | 5.8190 | 2700 | 1.6545          | 31.15 | 46.35 | 65.6461  |
| 0.0106        | 6.0345 | 2800 | 1.6724          | 30.24 | 45.36 | 66.8618  |
| 0.0076        | 6.25   | 2900 | 1.6834          | 30.25 | 45.13 | 67.9424  |
| 0.0049        | 6.4655 | 3000 | 1.6820          | 30.14 | 44.97 | 68.7528  |
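
The Bleu, Chrf, and Wer columns can be reproduced for any set of hypothesis/reference pairs with the `evaluate` library; a minimal sketch with illustrative strings (not drawn from the evaluation set):

```python
# Minimal sketch: computing BLEU, chrF, and WER with the evaluate library.
# The example strings are illustrative only.
import evaluate

predictions = ["the weather is fine today"]
references = ["the weather is nice today"]

sacrebleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")
wer = evaluate.load("wer")

# sacrebleu and chrf expect one list of references per prediction.
print(sacrebleu.compute(predictions=predictions, references=[[r] for r in references])["score"])
print(chrf.compute(predictions=predictions, references=[[r] for r in references])["score"])
print(wer.compute(predictions=predictions, references=references))
```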

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1