---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: whisper-medium-ko-1195h
    results: []
---

# whisper-medium-ko-1195h

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on Korean speech data from AI-HUB. It achieves the following results on the evaluation set:

- Loss: 0.1552
- Wer: 8.6411

## Model description

The model was trained to transcribe audio into Korean text.

## Intended uses & limitations

More information needed

## Training and evaluation data

All data were downloaded from AI-HUB (https://aihub.or.kr/). Two datasets in particular were used: the "Instruction Audio Set" and the "Noisy Conversation Audio Set". I gathered 796 hours of audio from the first dataset and 363 hours from the second (these figures cover the training split only and exclude the validation data).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 59151
- mixed_precision_training: Native AMP
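
The linear scheduler with warmup ramps the learning rate from 0 up to the peak of 1e-05 over the first 100 steps, then decays it linearly back to 0 at step 59151. A minimal pure-Python sketch of that schedule (illustrative, not the trainer's actual implementation):

```python
def linear_warmup_lr(step, base_lr=1e-5, warmup_steps=100, total_steps=59151):
    """Learning rate at a given step: linear warmup, then linear decay to 0."""
    if step < warmup_steps:
        # Warmup: ramp linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay: linearly from base_lr (end of warmup) down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, `linear_warmup_lr(100)` returns the peak rate 1e-05, and at the final step 59151 the rate has decayed to 0.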

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0782        | 0.33  | 6572  | 0.1833          | 10.9268 |
| 0.07          | 0.67  | 13144 | 0.1680          | 10.3611 |
| 0.0605        | 1.0   | 19716 | 0.1600          | 9.9357  |
| 0.0345        | 1.33  | 26288 | 0.1573          | 9.4492  |
| 0.0365        | 1.67  | 32860 | 0.1518          | 9.3395  |
| 0.0339        | 2.0   | 39432 | 0.1478          | 8.9811  |
| 0.0176        | 2.33  | 46004 | 0.1596          | 9.1702  |
| 0.0159        | 2.67  | 52576 | 0.1572          | 8.6746  |
| 0.0141        | 3.0   | 59148 | 0.1552          | 8.6411  |
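
Wer above is the word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the number of reference words (shown here as a percentage). A minimal sketch of the computation (illustrative only, not the exact metric implementation used during evaluation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` is 0.5: one substitution plus one deletion over four reference words.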

### Framework versions

- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2