---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-test2
  results: []
---

# wav2vec2-large-xlsr-korean-demo-test2

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.9948
- Wer: 0.5865
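
Usage is not documented on this card; below is a minimal inference sketch, assuming the checkpoint is published under the repository id `NX2411/wav2vec2-large-xlsr-korean-demo-test2`, that it exposes a CTC head with an accompanying `Wav2Vec2Processor`, and that input audio is 16 kHz mono. `sample.wav` is a placeholder file name.

```python
# Hedged inference sketch: the repo id and audio file are assumptions, not from this card.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "NX2411/wav2vec2-large-xlsr-korean-demo-test2"  # assumed repository id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load audio and resample to the 16 kHz rate expected by XLSR models.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the highest-scoring token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```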

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
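
The list above corresponds roughly to the Hugging Face `TrainingArguments` shown here. This is a reconstruction, not the original training script: the output directory is a placeholder, and the model setup, data collator, and `Trainer` call are omitted.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# output_dir is a placeholder; everything else mirrors the bullet list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xlsr-korean-demo-test2",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    num_train_epochs=15,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision
)
```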

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 29.8545       | 0.3   | 400   | 5.3860          | 1.0    |
| 4.9621        | 0.59  | 800   | 5.4067          | 1.0    |
| 4.9254        | 0.89  | 1200  | 5.1930          | 1.0    |
| 4.8425        | 1.19  | 1600  | 5.0176          | 1.0    |
| 4.7955        | 1.49  | 2000  | 5.0994          | 1.0    |
| 4.7091        | 1.78  | 2400  | 4.6204          | 1.0    |
| 4.4177        | 2.08  | 2800  | 3.8672          | 1.0    |
| 3.5708        | 2.38  | 3200  | 2.8938          | 0.9548 |
| 2.9828        | 2.67  | 3600  | 2.4027          | 0.9100 |
| 2.6781        | 2.97  | 4000  | 2.0710          | 0.8728 |
| 2.3347        | 3.27  | 4400  | 1.8604          | 0.8474 |
| 2.2081        | 3.57  | 4800  | 1.7831          | 0.8116 |
| 2.1184        | 3.86  | 5200  | 1.6272          | 0.8012 |
| 1.9834        | 4.16  | 5600  | 1.5311          | 0.8007 |
| 1.8402        | 4.46  | 6000  | 1.4352          | 0.7659 |
| 1.7859        | 4.75  | 6400  | 1.3503          | 0.7485 |
| 1.7374        | 5.05  | 6800  | 1.3561          | 0.7674 |
| 1.5966        | 5.35  | 7200  | 1.3319          | 0.7222 |
| 1.5716        | 5.65  | 7600  | 1.2539          | 0.7112 |
| 1.579         | 5.94  | 8000  | 1.2456          | 0.7028 |
| 1.4429        | 6.24  | 8400  | 1.2081          | 0.6884 |
| 1.4176        | 6.54  | 8800  | 1.1681          | 0.6914 |
| 1.403         | 6.84  | 9200  | 1.1583          | 0.6874 |
| 1.3417        | 7.13  | 9600  | 1.1235          | 0.6590 |
| 1.267         | 7.43  | 10000 | 1.1538          | 0.6720 |
| 1.268         | 7.73  | 10400 | 1.0878          | 0.6556 |
| 1.2245        | 8.02  | 10800 | 1.0759          | 0.6347 |
| 1.1437        | 8.32  | 11200 | 1.0815          | 0.6412 |
| 1.1386        | 8.62  | 11600 | 1.1007          | 0.6352 |
| 1.1045        | 8.92  | 12000 | 1.0574          | 0.6521 |
| 1.0533        | 9.21  | 12400 | 1.0772          | 0.6332 |
| 1.0274        | 9.51  | 12800 | 1.0622          | 0.6267 |
| 1.0398        | 9.81  | 13200 | 1.0380          | 0.6322 |
| 0.9869        | 10.1  | 13600 | 1.0654          | 0.6267 |
| 0.9309        | 10.4  | 14000 | 1.0505          | 0.6153 |
| 0.9231        | 10.7  | 14400 | 1.0300          | 0.6128 |
| 0.9324        | 11.0  | 14800 | 0.9777          | 0.6098 |
| 0.8467        | 11.29 | 15200 | 1.0123          | 0.6133 |
| 0.8471        | 11.59 | 15600 | 1.0086          | 0.6014 |
| 0.8601        | 11.89 | 16000 | 1.0051          | 0.6004 |
| 0.8111        | 12.18 | 16400 | 1.0242          | 0.5994 |
| 0.7525        | 12.48 | 16800 | 1.0015          | 0.5875 |
| 0.7697        | 12.78 | 17200 | 0.9987          | 0.5954 |
| 0.7585        | 13.08 | 17600 | 1.0040          | 0.5949 |
| 0.7163        | 13.37 | 18000 | 0.9584          | 0.5895 |
| 0.7041        | 13.67 | 18400 | 0.9795          | 0.5885 |
| 0.7115        | 13.97 | 18800 | 0.9726          | 0.5840 |
| 0.6907        | 14.26 | 19200 | 0.9809          | 0.5855 |
| 0.6847        | 14.56 | 19600 | 0.9979          | 0.5870 |
| 0.6641        | 14.86 | 20000 | 0.9948          | 0.5865 |

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
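
To check that a local environment matches these versions, a quick sanity check can be run (the import names used here are the standard ones for these libraries):

```python
# Print installed versions to compare against the list above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # expected 4.21.1
print("PyTorch:", torch.__version__)              # expected 1.12.1+cu113
print("Datasets:", datasets.__version__)          # expected 2.4.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.12.1
```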