---
base_model: unsloth/Mistral-Nemo-Base-2407-bnb-4bit
library_name: transformers
language:
- ar
pipeline_tag: text-generation
datasets:
- MahmoudIbrahim/Arabic_NVIDIA
---
- **Developed by:** Mahmoud Ibrahim
**How to use:**

```bash
pip install transformers bitsandbytes accelerate
```
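Note that 4-bit loading via `bitsandbytes` requires a CUDA-capable GPU. As a minimal sanity check before loading the model:

```python
import torch

# 4-bit quantization via bitsandbytes only runs on a CUDA device
assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit loading"
```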
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from IPython.display import Markdown
import textwrap

# Load the tokenizer and the model in 4-bit precision
tokenizer = AutoTokenizer.from_pretrained("MahmoudIbrahim/Mistral_12b_Arabic")
model = AutoModelForCausalLM.from_pretrained(
    "MahmoudIbrahim/Mistral_12b_Arabic",
    load_in_4bit=True,
    device_map="auto",
)

# Arabic Alpaca-style prompt template. In English: "Below are instructions that
# describe a task, along with an input that provides further context. Write a
# response that appropriately completes the request."
alpaca_prompt = """فيما يلي تعليمات تصف مهمة، إلى جانب مدخل يوفر سياقاً إضافياً. اكتب استجابة تُكمل الطلب بشكل مناسب.
### التعليمات:
{}
### الاستجابة:
{}"""

# Format the prompt with the instruction and an empty output placeholder.
# The instruction asks: "How can the Egyptian government and society as a whole
# strengthen the country's ability to achieve sustainable development?"
formatted_prompt = alpaca_prompt.format(
    "كيف يمكن للحكومة المصرية والمجتمع ككل أن يعززوا من قدرة البلاد على تحقيق التنمية المستدامة؟",  # instruction
    ""  # leave the response blank for generation
)

# Tokenize the prompt and move it to the same device as the model
input_ids = tokenizer.encode(formatted_prompt, return_tensors="pt").to(model.device)

def to_markdown(text):
    """Render generated text as a Markdown blockquote."""
    text = text.replace('•', '*')
    return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))

# Generate text; do_sample=True is required for top_k/top_p/temperature to apply
output = model.generate(
    input_ids,
    max_new_tokens=128,      # Adjust the generation length as needed
    num_return_sequences=1,  # Number of generated responses
    no_repeat_ngram_size=2,  # Prevent repeated 2-grams
    do_sample=True,          # Sample instead of greedy decoding
    top_k=50,                # Filter to the top-k tokens
    top_p=0.9,               # Nucleus sampling
    temperature=0.7,         # Control creativity level
)

generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
to_markdown(generated_text)
```
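Recent `transformers` releases deprecate the `load_in_4bit` keyword on `from_pretrained` in favor of an explicit `BitsAndBytesConfig`. A minimal sketch of the equivalent loading step under that API:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 4-bit quantization config, equivalent to load_in_4bit=True
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 is a common 4-bit quantization type
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16 for quality/speed
)

model = AutoModelForCausalLM.from_pretrained(
    "MahmoudIbrahim/Mistral_12b_Arabic",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Everything else in the example above stays the same; only the model-loading call changes.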
**The model response:**
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f36b5377b0eb97ea124e32/DPdKT-kQiDtfulJ-qQ8DX.png)