---
language:
- ur
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
  results: []
---

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9580
- Wer: 0.6520

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 50.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.5036        | 1.96  | 100  | 4.0538          | 1.0    |
| 3.3669        | 3.92  | 200  | 3.2041          | 1.0    |
| 3.1499        | 5.88  | 300  | 3.1220          | 1.0    |
| 3.0271        | 7.84  | 400  | 2.9935          | 0.9970 |
| 2.9565        | 9.8   | 500  | 2.9357          | 0.9993 |
| 2.9184        | 11.76 | 600  | 2.9165          | 0.9963 |
| 2.8832        | 13.73 | 700  | 2.8762          | 0.9911 |
| 2.8407        | 15.69 | 800  | 2.8102          | 0.9970 |
| 2.7007        | 17.65 | 900  | 2.4364          | 0.9963 |
| 2.4206        | 19.61 | 1000 | 1.9852          | 0.9421 |
| 2.0699        | 21.57 | 1100 | 1.4849          | 0.8343 |
| 1.8311        | 23.53 | 1200 | 1.3084          | 0.7801 |
| 1.7127        | 25.49 | 1300 | 1.2040          | 0.7446 |
| 1.6239        | 27.45 | 1400 | 1.1359          | 0.7280 |
| 1.5654        | 29.41 | 1500 | 1.0688          | 0.7159 |
| 1.4965        | 31.37 | 1600 | 1.0520          | 0.6985 |
| 1.445         | 33.33 | 1700 | 1.0314          | 0.6878 |
| 1.4095        | 35.29 | 1800 | 1.0063          | 0.6712 |
| 1.3853        | 37.25 | 1900 | 0.9848          | 0.6701 |
| 1.3558        | 39.22 | 2000 | 0.9738          | 0.6731 |
| 1.3415        | 41.18 | 2100 | 0.9656          | 0.6646 |
| 1.3102        | 43.14 | 2200 | 0.9632          | 0.6557 |
| 1.309         | 45.1  | 2300 | 0.9496          | 0.6557 |
| 1.2993        | 47.06 | 2400 | 0.9609          | 0.6550 |
| 1.2695        | 49.02 | 2500 | 0.9604          | 0.6542 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
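
## Training configuration sketch

The `generated_from_trainer` tag indicates the run used the Hugging Face `Trainer` API. As a rough, non-authoritative illustration, the hyperparameters listed above map onto `transformers.TrainingArguments` approximately as below. The output directory is a placeholder, the 100-step evaluation interval is inferred from the results table, and the data collator, processor, and WER metric wiring of the original script are omitted.

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above onto TrainingArguments.
# This is a sketch, not the exact training script used for this checkpoint.
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-ur-cv8",  # placeholder path
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 16
    seed=42,
    warmup_steps=50,
    num_train_epochs=50.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # "Native AMP" mixed precision (requires a CUDA GPU)
    evaluation_strategy="steps",
    eval_steps=100,                  # inferred from the 100-step intervals in the results table
)
```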
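
## Usage sketch

The card does not specify the published checkpoint identifier (the `model-index` name is empty), so the repository id below is a placeholder. Assuming the checkpoint follows the standard `Wav2Vec2ForCTC` + `Wav2Vec2Processor` layout of `facebook/wav2vec2-xls-r-300m` fine-tunes, transcription with greedy CTC decoding would look roughly like this:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "your-username/wav2vec2-xls-r-300m-ur-cv8"  # placeholder, replace with the actual model id

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# XLS-R expects 16 kHz mono audio; librosa resamples on load.
# "sample_ur.wav" is a placeholder for any Urdu speech recording.
speech, _ = librosa.load("sample_ur.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax over the vocabulary at each frame,
# then collapse repeats and blanks in batch_decode.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Evaluation of such transcriptions against Common Voice references with a word error rate metric is how the 0.6520 WER reported above would be reproduced, though the exact text normalization used for that figure is not documented in this card.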