---
language:
- eo
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_13_0
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: wav2vec2-common_voice_13_0-eo-10
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: MOZILLA-FOUNDATION/COMMON_VOICE_13_0 - EO
      type: common_voice_13_0
      config: eo
      split: validation
      args: 'Config: eo, Training split: train, Eval split: validation'
    metrics:
    - name: Wer
      type: wer
      value: 0.06566915357190017
---
# wav2vec2-common_voice_13_0-eo-10
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the Esperanto (`eo`) configuration of the Mozilla Foundation [Common Voice 13.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) dataset.
It achieves the following results on the evaluation set (a sketch for reproducing these metrics follows the list):
- Loss: 0.0454
- CER: 0.0118
- WER: 0.0657
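WER and CER are word- and character-level edit distances normalized by reference length. A minimal sketch of computing them with the 🤗 `evaluate` library; the transcript strings below are illustrative placeholders, not data from this card:

```python
import evaluate

# Illustrative placeholder transcripts; the scores above come from the
# Common Voice 13.0 Esperanto validation split, not from these strings.
predictions = ["la hundo kuras rapide"]
references = ["la hundo kuras rapide"]

wer = evaluate.load("wer")  # word error rate
cer = evaluate.load("cer")  # character error rate

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```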
## Model description
This checkpoint adapts the cross-lingual [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) speech encoder to Esperanto automatic speech recognition. On the Common Voice 13.0 Esperanto validation split it reaches a word error rate of about 6.6% and a character error rate of about 1.2%.
## Intended uses & limitations
The model is intended for transcribing Esperanto speech; audio should be sampled at (or resampled to) 16 kHz, which is what wav2vec2 expects. Because it was fine-tuned only on Common Voice read speech, accuracy on spontaneous, noisy, or strongly accented speech may be noticeably lower than the validation figures above.
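A minimal transcription sketch using the `pipeline` API; the Hub model id below is a placeholder for wherever this checkpoint is actually hosted:

```python
from transformers import pipeline

# "your-username/wav2vec2-common_voice_13_0-eo-10" is a placeholder repository
# id; substitute the real Hub path or a local checkpoint directory.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-common_voice_13_0-eo-10",
)

# Takes a path to an audio file (or a raw float array); the pipeline handles
# feature extraction and CTC decoding, returning a dict with a "text" field.
print(asr("esperanto_sample.wav")["text"])
```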
## Training and evaluation data
Per the metadata above, the model was trained on the `train` split of the Esperanto (`eo`) configuration of [mozilla-foundation/common_voice_13_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) and evaluated on its `validation` split.
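A sketch of loading those splits with 🤗 Datasets, assuming you have accepted the dataset's terms on the Hub and authenticated (e.g. via `huggingface-cli login`):

```python
from datasets import load_dataset, Audio

# Common Voice 13.0 is gated on the Hub; accept its terms and log in first.
train = load_dataset("mozilla-foundation/common_voice_13_0", "eo", split="train")
valid = load_dataset("mozilla-foundation/common_voice_13_0", "eo", split="validation")

# wav2vec2-large-xlsr-53 expects 16 kHz input, while Common Voice clips ship
# at 48 kHz, so resample on the fly.
train = train.cast_column("audio", Audio(sampling_rate=16_000))
valid = valid.cast_column("audio", Audio(sampling_rate=16_000))
```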
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
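Expressed as 🤗 Transformers `TrainingArguments`, the list above corresponds roughly to the following; `output_dir` is illustrative and not taken from the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-common_voice_13_0-eo-10",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed-precision training
)
```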
### Training results
| Training Loss | Epoch | Step | Validation Loss | CER | WER |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.9894 | 0.22 | 1000 | 2.9257 | 1.0 | 1.0 |
| 0.7104 | 0.44 | 2000 | 0.2129 | 0.0457 | 0.2538 |
| 0.2853 | 0.67 | 3000 | 0.1109 | 0.0274 | 0.1583 |
| 0.2327 | 0.89 | 4000 | 0.0909 | 0.0231 | 0.1320 |
| 0.1917 | 1.11 | 5000 | 0.0775 | 0.0206 | 0.1188 |
| 0.1803 | 1.33 | 6000 | 0.0698 | 0.0184 | 0.1055 |
| 0.1661 | 1.56 | 7000 | 0.0645 | 0.0169 | 0.0961 |
| 0.1635 | 1.78 | 8000 | 0.0639 | 0.0170 | 0.0964 |
| 0.1555 | 2.0 | 9000 | 0.0592 | 0.0156 | 0.0881 |
| 0.1386 | 2.22 | 10000 | 0.0559 | 0.0147 | 0.0821 |
| 0.1338 | 2.45 | 11000 | 0.0548 | 0.0146 | 0.0831 |
| 0.1307 | 2.67 | 12000 | 0.0529 | 0.0137 | 0.0759 |
| 0.1297 | 2.89 | 13000 | 0.0504 | 0.0134 | 0.0745 |
| 0.1201 | 3.11 | 14000 | 0.0499 | 0.0131 | 0.0734 |
| 0.1152 | 3.34 | 15000 | 0.0484 | 0.0128 | 0.0712 |
| 0.1144 | 3.56 | 16000 | 0.0477 | 0.0125 | 0.0695 |
| 0.1179 | 3.78 | 17000 | 0.0468 | 0.0122 | 0.0679 |
| 0.1112 | 4.0 | 18000 | 0.0468 | 0.0121 | 0.0676 |
| 0.1141 | 4.23 | 19000 | 0.0462 | 0.0121 | 0.0668 |
| 0.1085 | 4.45 | 20000 | 0.0458 | 0.0119 | 0.0664 |
| 0.105 | 4.67 | 21000 | 0.0456 | 0.0119 | 0.0660 |
| 0.1072 | 4.89 | 22000 | 0.0454 | 0.0119 | 0.0658 |
### Framework versions
- Transformers 4.29.2
- PyTorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3