
Hamsa-v0.1-beta

Model description

Hamsa (همسة) is a pre-trained automatic speech recognition (ASR) model for Arabic, built on the Whisper model. It is part of NADSOFT's effort to raise the quality of AI results for the Arabic language, with particular focus on the Middle East and North Africa (MENA) region and the broader Arab World, addressing the linguistic nuances and specific needs of these communities.

Intended uses & limitations

Hamsa is still under development, and it is important to be aware of its limitations. For example, the model may not accurately transcribe speech from speakers with strong dialectal accents, such as Moroccan Arabic, and it may have difficulty with noisy recordings.

Hamsa is not a perfect model, and it should not be used to generate transcripts intended for legal, medical, or other sensitive contexts.
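A minimal usage sketch, assuming the Hugging Face `transformers` library and its ASR pipeline (the audio file path is a placeholder; this snippet is illustrative and not an official example from this card):

```python
from transformers import pipeline

# Load Hamsa through the automatic-speech-recognition pipeline.
# "nadsoft/hamsa-v0.1-beta" is the Hugging Face repository id.
asr = pipeline("automatic-speech-recognition", model="nadsoft/hamsa-v0.1-beta")

# Transcribe an Arabic audio file (placeholder path).
result = asr("audio.wav")
print(result["text"])
```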

Training and evaluation data

  • nadsoft/Jordan-Audio
  • google/fleurs
  • mozilla-foundation/common_voice_11_0

Word Error Rate (WER) = 18.22
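The reported score is a word error rate: edit operations (substitutions, insertions, deletions) per 100 reference words. As a hedged illustration (not the card's actual evaluation code), WER can be computed as the word-level Levenshtein distance divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level Levenshtein distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

In practice, evaluation scripts typically use a library such as `jiwer` or `evaluate` for this computation; the function above only shows the underlying arithmetic.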

Training procedure

Training hyperparameters

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000, followed by 4000 additional steps on the NADSOFT data
  • mixed_precision_training: Native AMP
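The hyperparameters above can be sketched as a plain Python dictionary, e.g. for passing to a training script (values are copied from the list; the dictionary itself is illustrative, not an official config):

```python
# Hyperparameters from the card, collected as a dict (illustrative only).
hyperparameters = {
    "learning_rate": 1e-05,
    "train_batch_size": 32,
    "eval_batch_size": 16,
    "seed": 42,
    "optimizer": "Adam (betas=(0.9, 0.999), epsilon=1e-08)",
    "lr_scheduler_type": "linear",
    "lr_scheduler_warmup_steps": 500,
    "training_steps": 10000,  # plus 4000 additional steps on NADSOFT data
    "mixed_precision_training": "Native AMP",
}
```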
Model size: 764M parameters (F32 tensors, Safetensors format)
