---
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: microsoft-wavlm-fleurs-ur
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: fleurs
      type: fleurs
      config: ur_pk
      split: test
      args: ur_pk
    metrics:
    - name: Wer
      type: wer
      value: 0.4026467344688151
---

# microsoft-wavlm-fleurs-ur

This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the FLEURS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7294
- Wer: 0.4026

## Model description

[microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) adapted for Urdu automatic speech recognition, presumably with a CTC head on top (the usual recipe for WavLM ASR fine-tunes).

## Intended uses & limitations

Intended for transcribing Urdu speech sampled at 16 kHz, the rate WavLM expects. Performance outside the FLEURS read-speech domain has not been evaluated. A hedged inference sketch appears under "How to use" at the end of this card.

## Training and evaluation data

Fine-tuned and evaluated on the Urdu (`ur_pk`) configuration of [FLEURS](https://huggingface.co/datasets/google/fleurs); the results above are reported on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP

A hedged `TrainingArguments` reconstruction of this configuration is sketched at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.911 | 0.35 | 100 | 3.7784 | 1.0 |
| 3.0833 | 0.71 | 200 | 3.0964 | 1.0 |
| 3.028 | 1.06 | 300 | 3.0377 | 1.0 |
| 2.5114 | 1.41 | 400 | 2.4941 | 0.9922 |
| 1.0583 | 1.77 | 500 | 1.0753 | 0.7579 |
| 0.715 | 2.12 | 600 | 0.8524 | 0.6410 |
| 0.6779 | 2.47 | 700 | 0.7711 | 0.6063 |
| 0.6123 | 2.83 | 800 | 0.7170 | 0.5706 |
| 0.8183 | 3.18 | 900 | 0.6897 | 0.5368 |
| 0.5195 | 3.53 | 1000 | 0.6586 | 0.5303 |
| 0.4774 | 3.89 | 1100 | 0.6306 | 0.5014 |
| 0.4242 | 4.24 | 1200 | 0.6138 | 0.4817 |
| 0.4549 | 4.59 | 1300 | 0.6027 | 0.4678 |
| 0.2576 | 4.95 | 1400 | 0.5878 | 0.4600 |
| 0.1578 | 5.3 | 1500 | 0.6144 | 0.4585 |
| 0.3556 | 5.65 | 1600 | 0.5884 | 0.4582 |
| 0.2427 | 6.01 | 1700 | 0.6071 | 0.4572 |
| 0.267 | 6.36 | 1800 | 0.6303 | 0.4514 |
| 0.2468 | 6.71 | 1900 | 0.6358 | 0.4495 |
| 0.159 | 7.07 | 2000 | 0.6242 | 0.4312 |
| 0.1527 | 7.42 | 2100 | 0.6372 | 0.4400 |
| 0.1401 | 7.77 | 2200 | 0.6252 | 0.4292 |
| 0.1211 | 8.13 | 2300 | 0.6358 | 0.4251 |
| 0.1022 | 8.48 | 2400 | 0.6529 | 0.4356 |
| 0.0818 | 8.83 | 2500 | 0.6773 | 0.4200 |
| 0.0918 | 9.19 | 2600 | 0.6879 | 0.4267 |
| 0.119 | 9.54 | 2700 | 0.6948 | 0.4254 |
| 0.1615 | 9.89 | 2800 | 0.6920 | 0.4259 |
| 0.0953 | 10.25 | 2900 | 0.7019 | 0.4218 |
| 0.1008 | 10.6 | 3000 | 0.6933 | 0.4133 |
| 0.0729 | 10.95 | 3100 | 0.6950 | 0.4164 |
| 0.0636 | 11.31 | 3200 | 0.7151 | 0.4121 |
| 0.0395 | 11.66 | 3300 | 0.7053 | 0.4098 |
| 0.0391 | 12.01 | 3400 | 0.7081 | 0.3984 |
| 0.0507 | 12.37 | 3500 | 0.7012 | 0.4111 |
| 0.0598 | 12.72 | 3600 | 0.7169 | 0.4035 |
| 0.0515 | 13.07 | 3700 | 0.7358 | 0.4102 |
| 0.0429 | 13.43 | 3800 | 0.7236 | 0.4013 |
| 0.0398 | 13.78 | 3900 | 0.7404 | 0.4026 |
| 0.0946 | 14.13 | 4000 | 0.7285 | 0.4029 |
| 0.0428 | 14.49 | 4100 | 0.7271 | 0.3991 |
| 0.0329 | 14.84 | 4200 | 0.7294 | 0.4026 |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
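
## How to use

A minimal inference sketch, assuming the checkpoint ships a CTC head and a bundled `Wav2Vec2Processor` (both typical for WavLM ASR fine-tunes, but not confirmed by this card). The repo id `microsoft-wavlm-fleurs-ur` below is a placeholder for wherever this checkpoint is actually hosted.

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, WavLMForCTC

# Placeholder repo id -- point this at wherever the checkpoint is hosted.
model_id = "microsoft-wavlm-fleurs-ur"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes a processor is bundled
model = WavLMForCTC.from_pretrained(model_id).eval()

# Grab one Urdu test utterance from FLEURS (16 kHz audio).
sample = next(iter(load_dataset("google/fleurs", "ur_pk", split="test", streaming=True)))
audio = sample["audio"]["array"]

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
print(sample["transcription"])  # reference text for comparison
```

Corpus-level WER like the number reported above can then be computed by running this over the full test split and scoring predictions against references, e.g. with the `evaluate` library's `wer` metric.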
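
## Reproducing the training configuration (sketch)

For readers who want to map the hyperparameter list above back onto code, here is one hedged reconstruction using `transformers.TrainingArguments`. Everything is inferred from the list and the eval cadence in the results table; the actual training script and any extra arguments it passed are not part of this card, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameter list above, not the original script.
training_args = TrainingArguments(
    output_dir="microsoft-wavlm-fleurs-ur",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,  # x2 GPUs -> total train batch size 8
    per_device_eval_batch_size=4,   # x2 GPUs -> total eval batch size 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,                      # "Native AMP" mixed-precision training
    evaluation_strategy="steps",    # the results table evaluates every 100 steps
    eval_steps=100,
    logging_steps=100,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
)
```

Launching the script with two processes, e.g. `torchrun --nproc_per_node 2 ...`, would account for the `distributed_type: multi-GPU` and `num_devices: 2` entries.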