
Motivation :

The goal of this project was to adapt large language models to Arabic and build a new state-of-the-art Arabic LLM. Because Arabic instruction fine-tuning data is scarce, few LLMs have been trained specifically for Arabic, which is surprising given the large number of Arabic speakers.
Our final model was trained on a high-quality, synthetically generated instruction fine-tuning (IFT) dataset and then evaluated on the Hugging Face Arabic leaderboard.

Training :

This model is the 2B version, fine-tuned from google/gemma-2-2b. It was trained for 2 days on a single A100 GPU using LoRA with a rank of 128, a learning rate of 1e-4, and a cosine learning rate schedule.
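
The snippet below is a minimal sketch of a comparable LoRA setup with the peft and transformers libraries. The target modules, LoRA alpha, dropout, batch size, and epoch count are assumptions for illustration, not the exact training script (see the GitHub repository linked below for the actual code).

```python
# Sketch of a LoRA configuration matching the card: rank 128, lr 1e-4, cosine schedule.
# Values marked "assumed" are not stated in the card and may differ from the real run.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base_model = "google/gemma-2-2b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=128,                                                     # LoRA rank (stated in the card)
    lora_alpha=256,                                            # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # assumed attention projections
    lora_dropout=0.05,                                         # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="barka-2b-it",
    learning_rate=1e-4,                # stated in the card
    lr_scheduler_type="cosine",        # stated in the card
    per_device_train_batch_size=4,     # assumed
    num_train_epochs=1,                # assumed
    bf16=True,
)
# A Trainer (or SFT trainer) would then be built from `model`, `args`,
# and the tokenized IFT dataset.
```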

Evaluation :

| Metric        | Slim205/Barka-2b-it |
|---------------|---------------------|
| Average       | 46.98 |
| ACVA          | 39.5  |
| AlGhafa       | 46.5  |
| MMLU          | 37.06 |
| EXAMS         | 38.73 |
| ARC Challenge | 35.78 |
| ARC Easy      | 36.97 |
| BOOLQ         | 73.77 |
| COPA          | 50    |
| HELLASWAG     | 28.98 |
| OPENBOOK QA   | 43.84 |
| PIQA          | 56.36 |
| RACE          | 36.19 |
| SCIQ          | 55.78 |
| TOXIGEN       | 78.29 |

Please refer to https://github.com/Slim205/Arabicllm/ for more details.
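
Below is a minimal sketch of running the model with transformers. The prompt and generation settings are illustrative assumptions; for instruction-style prompts, applying the base Gemma-2 chat template may also be appropriate.

```python
# Sketch of loading and prompting Slim205/Barka-2b-it with transformers.
# Generation settings are illustrative, not recommendations from the authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Slim205/Barka-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example Arabic instruction: "What is the capital of Tunisia?"
prompt = "ما هي عاصمة تونس؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```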

