---
language:
  - sv
  - 'no'
  - da
  - multilingual
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
  - hf-asr-leaderboard
datasets:
  - mozilla-foundation/common_voice_11_0
  - babelbox/babelbox_voice
  - NbAiLab/NST
  - NbAiLab/NPSC
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Medium Nordic
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: sv-SE
          split: test
        metrics:
          - type: wer
            value: 11.31
            name: Wer
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: da
          split: test
        metrics:
          - type: wer
            value: 14.86
            name: Wer
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: nn-NO
          split: test
        metrics:
          - type: wer
            value: 37.02
            name: Wer
---

Whisper Medium Nordic

This model is a fine-tuned version of openai/whisper-medium on the mozilla-foundation/common_voice_11_0 (sv-SE, da, nn-NO), the babelbox/babelbox_voice (Swedish radio), the NbAiLab/NST (Norwegian radio), the NbAiLab/NPSC (Norwegian parliament) and the google/fleurs (sv_se, da_dk, nb_no) datasets. The goal is to leverage transfer learning across Nordic languages, which have strong similarities.

It achieves the following results on the Common Voice 11 Swedish (sv-SE) test set:

  • Loss: 0.2129
  • Wer: 11.3079
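
For a quick check, the model can be loaded through the Hugging Face Transformers speech-recognition pipeline. The snippet below is only a minimal sketch: the Hub model id and the audio file path are placeholders, not values taken from this card.

```python
# Minimal inference sketch (not from the model card). The model id below is a
# placeholder: replace it with this model's actual repository id on the Hub.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/whisper-medium-nordic",  # placeholder id, assumption
    chunk_length_s=30,
)

# Transcribe a local recording; the file name is a stand-in for your own audio.
print(asr("sample_swedish.wav")["text"])
```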

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Please note that a bug during training prevented WER from being evaluated correctly, which is why the WER column in the training results below is not meaningful. The validation loss suggests the model started overfitting after roughly 5000-6000 steps.

Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

  • learning_rate: 3e-06
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 10000
  • mixed_precision_training: Native AMP
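
As a rough illustration, the hyperparameters above can be expressed as a transformers Seq2SeqTrainingArguments configuration. This is a hedged sketch, not the actual training script: output_dir, the evaluation/save cadence, generation settings, and the reporting flag are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: the listed hyperparameters come from the list above; everything
# else (output_dir, eval/save cadence, generation and reporting settings) is an
# assumption and not taken from this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-nordic",  # assumption
    learning_rate=3e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=10_000,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    evaluation_strategy="steps",  # assumption, consistent with eval every 1000 steps
    eval_steps=1_000,
    save_steps=1_000,             # assumption
    predict_with_generate=True,   # assumption, usual for seq2seq ASR evaluation
    report_to=["wandb"],          # the card links a WandB run
)
```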

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|---------------|-------|-------|-----------------|---------|
| 0.3056        | 0.1   | 1000  | 0.2670          | 99.9221 |
| 0.16          | 0.2   | 2000  | 0.2322          | 99.6640 |
| 0.1309        | 0.3   | 3000  | 0.2152          | 98.9759 |
| 0.097         | 0.4   | 4000  | 0.2112          | 100.0   |
| 0.091         | 0.5   | 5000  | 0.2094          | 99.7312 |
| 0.1098        | 0.6   | 6000  | 0.2098          | 98.6077 |
| 0.0637        | 0.7   | 7000  | 0.2148          | 98.4625 |
| 0.0718        | 0.8   | 8000  | 0.2151          | 99.8710 |
| 0.0517        | 0.9   | 9000  | 0.2175          | 97.2342 |
| 0.0465        | 1.0   | 10000 | 0.2129          | 96.3552 |

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2

WandB run

https://wandb.ai/pn-aa/whisper/runs/xc70fbwv?workspace=user-emilio_marinone

Baseline model

This model is a fine-tuned version of openai/whisper-medium; the table below shows the improvements over the whisper-medium baseline when evaluated on the Common Voice 11 Swedish (sv-SE), Danish (da), and Norwegian (nn-NO) test splits.

| Language | Whisper Medium (WER) | Whisper Medium Nordic (WER) |
|----------|----------------------|-----------------------------|
| sv-SE    | 14.93                | 11.31                       |
| da       | 20.85                | 14.86                       |
| nn-NO    | 50.82                | 37.02                       |
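
The figures above were measured on the Common Voice 11 test splits. The snippet below is a rough, hedged sketch of how such WER numbers can be recomputed with the datasets and evaluate libraries; the model id is a placeholder, only a small subset is scored, and no text normalisation is applied, so the results will not match the table exactly.

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

# Sketch only: placeholder model id, small subset, no text normalisation.
MODEL_ID = "<namespace>/whisper-medium-nordic"  # placeholder, assumption

asr = pipeline("automatic-speech-recognition", model=MODEL_ID, chunk_length_s=30)
wer = evaluate.load("wer")

# Common Voice 11 Swedish test split; swap "sv-SE" for "da" or "nn-NO" to score
# the other languages. Requires accepting the dataset terms on the Hub.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in ds.select(range(100)):  # small subset to keep the sketch cheap
    predictions.append(asr(sample["audio"]["array"])["text"])
    references.append(sample["sentence"])

print("WER:", 100 * wer.compute(predictions=predictions, references=references))
```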