---
language:
  - sv
  - 'no'
  - da
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_11_0
  - babelbox/babelbox_voice
  - NbAiLab/NST
  - NbAiLab/NPSC
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Medium Nordic
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: sv-SE
          split: test
        metrics:
          - name: Wer
            type: wer
            value: 11.31
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: da
          split: test
        metrics:
          - name: Wer
            type: wer
            value: 14.86
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_11_0
          type: mozilla-foundation/common_voice_11_0
          config: nn-NO
          split: test
        metrics:
          - name: Wer
            type: wer
            value: 37.02
---

Whisper Medium Nordic

This model is a fine-tuned version of openai/whisper-medium on the mozilla-foundation/common_voice_11_0 (sv-SE, da, nn-NO), babelbox/babelbox_voice (Swedish radio), NbAiLab/NST (Norwegian radio), NbAiLab/NPSC (Norwegian parliament) and google/fleurs (sv_se, da_dk, nb_no) datasets. The goal is to leverage transfer learning across the Nordic languages, which are closely related.
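A minimal inference sketch with the transformers pipeline is shown below. The repo id "marinone94/whisper-medium-nordic" and the audio path are assumptions; forcing the language via generate_kwargs requires a reasonably recent transformers version, otherwise Whisper will simply auto-detect the language.

```python
from transformers import pipeline

# Assumed repo id for this checkpoint; adjust if it is published under another name.
asr = pipeline("automatic-speech-recognition", model="marinone94/whisper-medium-nordic")

# Transcribe a local audio file (the path is illustrative). Whisper detects the
# language automatically, or it can be pinned, e.g. to Swedish, via generate_kwargs.
text = asr(
    "sample_sv.wav",
    generate_kwargs={"language": "swedish", "task": "transcribe"},
)["text"]
print(text)
```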

It achieves the following results on the Common Voice Swedish (sv-SE) test set (an evaluation sketch follows the list):

  • Loss: 0.2129
  • WER: 11.3079
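As a rough illustration of how such a WER figure can be reproduced with the evaluate library: this is only a sketch, not the exact evaluation script behind the reported number. The repo id, the 100-sample subset, and the lowercasing used as normalisation are all assumptions; the card does not specify the normalisation applied to obtain 11.31.

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

# Assumed repo id; Common Voice 11 also requires accepting its terms on the Hub
# and being logged in (e.g. via `huggingface-cli login`).
asr = pipeline("automatic-speech-recognition", model="marinone94/whisper-medium-nordic")

cv = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test")
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

wer = evaluate.load("wer")
predictions, references = [], []
for sample in cv.select(range(100)):  # small subset for illustration only
    out = asr(
        {"array": sample["audio"]["array"], "sampling_rate": 16_000},
        generate_kwargs={"language": "swedish", "task": "transcribe"},
    )
    predictions.append(out["text"].lower())
    references.append(sample["sentence"].lower())

print(100 * wer.compute(predictions=predictions, references=references))
```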

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Please note that a bug during training prevented WER from being evaluated correctly, so the WER values in the training results table below are not reliable. The validation loss suggests the model started overfitting after 5000-6000 steps.

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-06
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 10000
  • mixed_precision_training: Native AMP
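
For reference, these settings correspond roughly to a Seq2SeqTrainingArguments configuration like the sketch below. The output directory and the eval cadence are assumptions (the 1000-step eval interval is inferred from the results table); the Adam betas and epsilon listed above are the Trainer defaults, so they are not set explicitly.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the listed hyperparameters.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-nordic",  # assumed output directory
    learning_rate=3e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=10_000,
    fp16=True,                   # Native AMP mixed precision
    evaluation_strategy="steps",
    eval_steps=1_000,            # inferred from the 1000-step cadence in the results table
    predict_with_generate=True,
)
```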

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.3056        | 0.1   | 1000  | 0.2670          | 99.9221 |
| 0.16          | 0.2   | 2000  | 0.2322          | 99.6640 |
| 0.1309        | 0.3   | 3000  | 0.2152          | 98.9759 |
| 0.097         | 0.4   | 4000  | 0.2112          | 100.0   |
| 0.091         | 0.5   | 5000  | 0.2094          | 99.7312 |
| 0.1098        | 0.6   | 6000  | 0.2098          | 98.6077 |
| 0.0637        | 0.7   | 7000  | 0.2148          | 98.4625 |
| 0.0718        | 0.8   | 8000  | 0.2151          | 99.8710 |
| 0.0517        | 0.9   | 9000  | 0.2175          | 97.2342 |
| 0.0465        | 1.0   | 10000 | 0.2129          | 96.3552 |

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2