---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-korean-all
  results: []
---

# wav2vec2-large-xls-r-korean-all

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1535
- CER (character error rate): 0.0329

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | CER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0206        | 0.36  | 500   | 3.3589          | 0.9871 |
| 0.6381        | 0.72  | 1000  | 0.6371          | 0.1714 |
| 0.3951        | 1.08  | 1500  | 0.4320          | 0.1250 |
| 0.2858        | 1.44  | 2000  | 0.3546          | 0.1056 |
| 0.2545        | 1.8   | 2500  | 0.2925          | 0.0872 |
| 0.1833        | 2.16  | 3000  | 0.2520          | 0.0743 |
| 0.1898        | 2.51  | 3500  | 0.2386          | 0.0679 |
| 0.1981        | 2.87  | 4000  | 0.2135          | 0.0631 |
| 0.123         | 3.23  | 4500  | 0.2129          | 0.0576 |
| 0.1221        | 3.59  | 5000  | 0.2013          | 0.0543 |
| 0.1218        | 3.95  | 5500  | 0.2000          | 0.0554 |
| 0.1096        | 4.31  | 6000  | 0.1884          | 0.0507 |
| 0.1113        | 4.67  | 6500  | 0.1781          | 0.0455 |
| 0.075         | 5.03  | 7000  | 0.1811          | 0.0458 |
| 0.0922        | 5.39  | 7500  | 0.1748          | 0.0455 |
| 0.0766        | 5.75  | 8000  | 0.1807          | 0.0434 |
| 0.0811        | 6.11  | 8500  | 0.1699          | 0.0411 |
| 0.0876        | 6.47  | 9000  | 0.1641          | 0.0398 |
| 0.0913        | 6.82  | 9500  | 0.1632          | 0.0392 |
| 0.0658        | 7.18  | 10000 | 0.1667          | 0.0388 |
| 0.0831        | 7.54  | 10500 | 0.1613          | 0.0375 |
| 0.0716        | 7.9   | 11000 | 0.1552          | 0.0361 |
| 0.0485        | 8.26  | 11500 | 0.1534          | 0.0351 |
| 0.0469        | 8.62  | 12000 | 0.1541          | 0.0343 |
| 0.0503        | 8.98  | 12500 | 0.1497          | 0.0340 |
| 0.041         | 9.34  | 13000 | 0.1535          | 0.0337 |
| 0.0556        | 9.7   | 13500 | 0.1535          | 0.0329 |

### Framework versions

- Transformers 4.33.2
- Pytorch 1.12.1+cu113
- Datasets 2.14.5
- Tokenizers 0.13.3
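
For reference, the hyperparameters listed under "Training hyperparameters" map roughly onto the `transformers.TrainingArguments` sketch below. This is an illustrative reconstruction rather than the actual training script: the output directory, evaluation strategy, and evaluation/logging cadence are assumptions (the 500-step interval is inferred from the results table), and the Adam betas/epsilon match the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Illustrative sketch only -- mirrors the hyperparameters listed in this card.
# output_dir, evaluation_strategy, eval_steps, and logging_steps are assumptions.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-korean-all",  # assumed name
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
    evaluation_strategy="steps",  # assumed from the 500-step eval cadence above
    eval_steps=500,               # assumed
    logging_steps=500,            # assumed
)
```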
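
Since the card reports CER, the checkpoint is a character-level CTC model, so a minimal inference sketch with `Wav2Vec2ForCTC` and greedy decoding looks like the following. The repo id and audio file name are placeholders (the full Hub path of this checkpoint is not given in the card), and the resampling step reflects the 16 kHz input expected by XLS-R.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
MODEL_ID = "wav2vec2-large-xls-r-korean-all"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load a mono waveform and resample to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("example.wav")  # placeholder file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(
    waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt"
)

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding back to characters (the metric reported above is CER).
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```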