---
language: "ar"
pipeline_tag: automatic-speech-recognition
tags:
- CTC
- Attention
- pytorch
- Transformer
license: "cc-by-nc-4.0"
datasets:
- MGB-3
- egyptian-arabic-conversational-speech-corpus
metrics:
- wer
model-index:
- name: omarxadel/hubert-large-arabic-egyptian
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    metrics:
    - name: Test WER
      type: wer
      value: 29.3755
    - name: Validation WER
      type: wer
      value: 29.1828
---
# Wav2Vec2-XLSR-53 with CTC fine-tuned on MGB-3 and the Egyptian Arabic Conversational Speech Corpus (No LM)
This model is a fine-tuned version of [Wav2Vec2-XLSR-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53). We fine-tuned it on the MGB-3 and Egyptian Arabic Conversational Speech Corpus datasets, achieving a test WER of `29.3755%`.
The performance of the model on the datasets is the following:
| Valid WER | Test WER |
|:---------:|:--------:|
| 29.18 | 29.37 |
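# Usage
A minimal inference sketch with the `transformers` pipeline, assuming the checkpoint is hosted under the repo name listed in the metadata above and exposes a standard CTC head; the audio path is illustrative.
```python
from transformers import pipeline

# Load the ASR pipeline; the model id is taken from the model-index metadata above.
asr = pipeline(
    "automatic-speech-recognition",
    model="omarxadel/hubert-large-arabic-egyptian",
)

# Transcribe a local 16 kHz mono audio file (replace with your own recording).
result = asr("path/to/egyptian_arabic_sample.wav")
print(result["text"])
```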
# Acknowledgement
Model fine-tuning and data processing for this work were performed as part of a Graduation Project at the Faculty of Engineering, Alexandria University (CCE Program).