---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - common_voice_11_0
metrics:
  - wer
model-index:
  - name: emilios/whisper-medium-el-n2
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: common_voice_11_0
          type: common_voice_11_0
          config: el
          split: test
          args: el
        metrics:
          - name: Wer
            type: wer
            value: 9.964710252600298
---

emilios/whisper-medium-el-n2

This model is a fine-tuned version of emilios/whisper-medium-el-n2 on the Greek (el) subset of the common_voice_11_0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5674
  • Wer: 9.9647
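
As a quick illustration (not part of the original card), the checkpoint can be loaded with the transformers automatic-speech-recognition pipeline; the audio path below is a placeholder for a 16 kHz Greek recording:

```python
# Hedged usage sketch: load the checkpoint with the transformers ASR pipeline.
# "sample_el.wav" is a placeholder path, not a file shipped with this repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="emilios/whisper-medium-el-n2",
)

# chunk_length_s enables long-form transcription by chunking the audio.
result = asr("sample_el.wav", chunk_length_s=30)
print(result["text"])
```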

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged training-arguments sketch follows the list):

  • learning_rate: 3e-06
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • distributed_type: multi-GPU
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
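
For illustration only, these values might map onto transformers' Seq2SeqTrainingArguments roughly as sketched below; the output directory is an assumption, since the card does not state it.

```python
# Hedged sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# output_dir is an assumption; everything else mirrors the list.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-el-n2",  # assumed output path
    learning_rate=3e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```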

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0014        | 58.82  | 1000  | 0.4951          | 10.3640 |
| 0.0006        | 117.65 | 2000  | 0.5181          | 10.2805 |
| 0.0007        | 175.82 | 3000  | 0.5317          | 10.1133 |
| 0.0004        | 234.65 | 4000  | 0.5396          | 10.1226 |
| 0.0004        | 293.47 | 5000  | 0.5532          | 10.1040 |
| 0.0013        | 352.29 | 6000  | 0.5645          | 10.0854 |
| 0.0002        | 411.12 | 7000  | 0.5669          | 10.1133 |
| 0.0001        | 469.94 | 8000  | 0.5669          | 9.8997  |
| 0.0001        | 528.76 | 9000  | 0.5645          | 9.9276  |
| 0.0001        | 587.82 | 10000 | 0.5674          | 9.9647  |
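
The Wer column above appears to be a percentage. As a side note (not from the original card), a comparable score can be computed with the evaluate library:

```python
# Hedged sketch of computing WER with the `evaluate` library.
# The reference/prediction strings are made-up examples, not Common Voice data.
import evaluate

wer_metric = evaluate.load("wer")
references = ["καλημέρα σας", "τι κάνετε σήμερα"]
predictions = ["καλημέρα σας", "τι κάνεις σήμερα"]

# evaluate returns a fraction; multiply by 100 to match the values reported above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```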

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 2.0.0.dev20221216+cu116
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2
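
A quick way to check that a local environment roughly matches these versions (a hedged convenience snippet, not from the original card):

```python
# Print installed versions of the libraries listed above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```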