---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
library_name: peft
datasets:
  - Yasbok/Alpaca_arabic_instruct
language:
  - ar
pipeline_tag: text-generation
tags:
  - finance
---

# Meta_LLama3_Arabic

Meta_LLama3_Arabic is a fine-tuned version of Meta's LLaMA 3.1 8B model, specialized for Arabic. It is intended for a variety of Arabic NLP tasks, including text generation and language comprehension.

## Model Details

- **Model Name:** Meta_LLama3_Arabic
- **Base Model:** unsloth/meta-llama-3.1-8b-bnb-4bit (Meta LLaMA 3.1 8B)
- **Languages:** Arabic
- **Tasks:** Text Generation, Language Understanding
- **Quantization:** 4-bit quantization with bitsandbytes

## Installation

To use this model, you need the `unsloth` and `transformers` libraries from Hugging Face. Install them as follows:

```bash
pip install transformers unsloth
```

## How to Use

First, define the Alpaca-style prompt template used during fine-tuning:

```python
# Alpaca-style prompt template (in Arabic). English translation:
# "Below is an instruction that describes a task, paired with an input that
#  provides further context. Write a response that appropriately completes
#  the request."
alpaca_prompt = """فيما يلي تعليمات تصف مهمة، إلى جانب مدخل يوفر سياقاً إضافياً. اكتب استجابة تُكمل الطلب بشكل مناسب.

### التعليمات:
{}

### المدخل:
{}

### الاستجابة:
{}"""
```
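To make the three `{}` slots concrete, here is a minimal, self-contained check of how `str.format` fills the template (the short template below is a trimmed stand-in for the full Arabic one above):

```python
# Trimmed stand-in for the full Arabic template (same three slots).
template = "### التعليمات:\n{}\n\n### المدخل:\n{}\n\n### الاستجابة:\n{}"

# Instruction and input are filled; the response slot is left empty so the
# model generates the answer as a continuation of the prompt.
prompt = template.format("ماذا تعرف عن الحضاره المصريه", "القديمة", "")

print(prompt.endswith("### الاستجابة:\n"))  # → True
```

Leaving the response slot empty is what turns the template into a generation prompt rather than a training example.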


Then load the model and run streamed generation:

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "MahmoudIbrahim/Meta_LLama3_Arabic",
    max_seq_length = 2048,
    dtype = None,          # auto-detect dtype
    load_in_4bit = True,   # 4-bit quantization via bitsandbytes
)

FastLanguageModel.for_inference(model)  # enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "ماذا تعرف عن الحضاره المصريه",  # instruction
            "القديمة",                       # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

# Stream tokens to stdout as they are generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 150)
```
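`model.generate` returns the full sequence, prompt included. If you want only the answer text, a small hypothetical helper (an assumption, not part of this model card) can split the decoded output on the template's response header:

```python
# Hypothetical helper (assumption, not from the model card): keep only the
# text after the "### الاستجابة:" ("### Response:") header of the template.
def extract_response(decoded: str) -> str:
    marker = "### الاستجابة:"
    # Split once; everything after the marker is the generated answer.
    return decoded.split(marker, 1)[-1].strip()

decoded_example = "### التعليمات:\nسؤال\n\n### الاستجابة:\nجواب تجريبي"
print(extract_response(decoded_example))  # → جواب تجريبي
```

In practice you would apply this to `tokenizer.batch_decode(...)` output when not using a streamer.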