
Meta_LLama3_Arabic

Meta_LLama3_Arabic is a fine-tuned version of Meta's LLaMA 3 model, specialized for Arabic. It is designed for a variety of Arabic NLP tasks, including text generation and language comprehension.

Model Details

  • Model Name: Meta_LLama3_Arabic
  • Base Model: LLaMA 3
  • Languages: Arabic
  • Tasks: Text Generation, Language Understanding
  • Quantization: 4-bit (loaded with load_in_4bit = True, as in the example below)

Installation

To use this model, you need the unsloth and transformers libraries from Hugging Face. You can install them as follows:

!pip install transformers unsloth

How to use:

alpaca_prompt = """فيما يلي تعليمات تصف مهمة، إلى جانب مدخل يوفر سياقاً إضافياً. اكتب استجابة تُكمل الطلب بشكل مناسب.

### التعليمات:
{}

### المدخل:
{}

### الاستجابة:
{}"""


from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "MahmoudIbrahim/Meta_LLama3_Arabic",
    max_seq_length = 2048,
    dtype = None,        # auto-detect (float16 / bfloat16)
    load_in_4bit = True, # load with 4-bit quantization to reduce memory use
)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "ماذا تعرف عن الحضاره المصريه", # instruction: "What do you know about the Egyptian civilization"
        "القديمة",                      # input: "the ancient one"
        "",                             # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 150)
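Since generate returns the prompt followed by the completion, the model's answer can be isolated by splitting the decoded text on the response header of the template. A minimal sketch on a hypothetical decoded string (the output text here is illustrative, not a real model output):

```python
# Hypothetical decoded output: the prompt echoed back, then the completion.
decoded = "### التعليمات:\n...\n\n### الاستجابة:\nالحضارة المصرية القديمة من أقدم الحضارات."

# Everything after the response header ("### الاستجابة:") is the model's answer.
response = decoded.split("### الاستجابة:")[-1].strip()
print(response)  # → الحضارة المصرية القديمة من أقدم الحضارات.
```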
