---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
library_name: peft
datasets:
- Yasbok/Alpaca_arabic_instruct
language:
- ar
pipeline_tag: text-generation
tags:
- finance
---
# Meta_LLama3_Arabic
**Meta_LLama3_Arabic** is a fine-tuned version of Meta's LLaMA model, specialized for Arabic-language tasks. The model is designed for a variety of NLP tasks, including text generation and language comprehension in Arabic.
## Model Details
- **Model Name**: Meta_LLama3_Arabic
- **Base Model**: `unsloth/meta-llama-3.1-8b-bnb-4bit` (Meta LLaMA 3.1 8B)
- **Languages**: Arabic
- **Tasks**: Text Generation, Language Understanding
- **Quantization**: 4-bit quantization with `bitsandbytes` (loaded with `load_in_4bit=True`)
## Installation
To use this model, you need the `unsloth` and `transformers` libraries from Hugging Face. You can install them as follows:
```bash
pip install transformers unsloth
```
## Usage
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Arabic Alpaca-style prompt. The header translates roughly to:
# "Below is an instruction that describes a task, paired with an input that
# provides further context. Write a response that appropriately completes
# the request."
alpaca_prompt = """فيما يلي تعليمات تصف مهمة، إلى جانب مدخل يوفر سياقاً إضافياً. اكتب استجابة تُكمل الطلب بشكل مناسب.
### التعليمات:
{}
### المدخل:
{}
### الاستجابة:
{}"""

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "MahmoudIbrahim/Meta_LLama3_Arabic",
    max_seq_length = 2048,
    dtype = None,         # auto-detect (float16/bfloat16)
    load_in_4bit = True,  # load with 4-bit quantization
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "ماذا تعرف عن الحضاره المصريه",  # instruction: "What do you know about Egyptian civilization?"
            "القديمة",                        # input: "ancient"
            "",                               # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 150)
```
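
The Alpaca-style template above is filled with plain `str.format` substitution: the three `{}` placeholders take the instruction, the optional input, and an empty slot where the model's response will be generated. A minimal sketch of the prompt assembly alone (no GPU or model download needed):

```python
# Sketch of how the Alpaca-style template is filled before tokenization.
alpaca_prompt = """فيما يلي تعليمات تصف مهمة، إلى جانب مدخل يوفر سياقاً إضافياً. اكتب استجابة تُكمل الطلب بشكل مناسب.
### التعليمات:
{}
### المدخل:
{}
### الاستجابة:
{}"""

prompt = alpaca_prompt.format(
    "ماذا تعرف عن الحضاره المصريه",  # instruction
    "القديمة",                        # input
    "",                               # response slot left empty for generation
)
print(prompt)
```

Because the response slot is empty, the assembled string ends right after the `### الاستجابة:` ("Response") header, which is where the model continues generating.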