---
language:
  - eu
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_13_0
metrics:
  - wer
model-index:
  - name: Whisper Medium Basque
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_13_0 eu
          type: mozilla-foundation/common_voice_13_0
          config: eu
          split: test
          args: eu
        metrics:
          - name: Wer
            type: wer
            value: 13.179958686054519
---

# Whisper Medium Basque

This model is a fine-tuned version of openai/whisper-medium on the mozilla-foundation/common_voice_13_0 eu dataset. It achieves the following results on the evaluation set:

- Loss: 0.2201
- Wer: 13.1800
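
For reference, below is a minimal transcription sketch using the Transformers `pipeline` API. The repo id `xezpeleta/whisper-medium-eu` and the audio file name are assumptions, and the `language`/`task` generation arguments require a recent Transformers release.

```python
# Minimal inference sketch (assumed repo id and audio path).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xezpeleta/whisper-medium-eu",  # assumption: this repository's id on the Hub
)

# Whisper is multilingual, so force Basque transcription explicitly.
result = asr(
    "sample.wav",  # hypothetical audio file
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```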

## Model description

More information needed

## Intended uses & limitations

If you want to use this model with whisper.cpp, download the ggml-format weights from this repository: `ggml-medium-eu.bin`.
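
As a sketch (not part of the original card), the file can also be fetched programmatically with `huggingface_hub`; the repo id below is an assumption taken from this repository's name, and the whisper.cpp command in the trailing comment is only illustrative.

```python
# Sketch: download the ggml weights for whisper.cpp from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

ggml_path = hf_hub_download(
    repo_id="xezpeleta/whisper-medium-eu",  # assumption: this repository's id
    filename="ggml-medium-eu.bin",
)
print(ggml_path)

# Typical whisper.cpp usage afterwards (illustrative):
#   ./main -m /path/to/ggml-medium-eu.bin -l eu -f audio.wav
```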

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the illustrative `Seq2SeqTrainingArguments` sketch after the list):

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7000
- mixed_precision_training: Native AMP
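
The original training script is not included in this card; the following is only an approximate reconstruction of the hyperparameters above as `Seq2SeqTrainingArguments`. The `output_dir` and the evaluation/save cadence are assumptions, while Adam's betas and epsilon match the Transformers defaults.

```python
# Approximate, hand-reconstructed training arguments; not the author's original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-eu",   # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=7000,
    fp16=True,                          # "Native AMP" mixed-precision training
    evaluation_strategy="steps",        # assumed: the results table reports metrics every 1000 steps
    eval_steps=1000,
    save_steps=1000,
    predict_with_generate=True,         # needed to compute WER during evaluation
)
```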

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4203        | 0.14  | 1000 | 0.4128          | 28.2656 |
| 0.2693        | 0.29  | 2000 | 0.3240          | 22.0523 |
| 0.2228        | 0.43  | 3000 | 0.2737          | 18.1437 |
| 0.1002        | 1.1   | 4000 | 0.2554          | 16.3534 |
| 0.0863        | 1.24  | 5000 | 0.2351          | 14.7880 |
| 0.0636        | 1.39  | 6000 | 0.2251          | 13.5971 |
| 0.0271        | 2.06  | 7000 | 0.2201          | 13.1800 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2