This is a 12B-parameter model, finetuned from nazimali/Mistral-Nemo-Kurdish on a single Kurdish (Kurmanji) instruction dataset. My intention was to train it with both Kurdish Kurmanji (Latin script) and Kurdish Sorani (Arabic script), but training time was much longer than anticipated, so I decided to start with one full Kurdish Kurmanji dataset.
I will look into a multi-GPU training setup so I don't have to wait all day for results, and I still want to train it with both Kurmanji and Sorani Arabic script.
Try the Spaces demo for an example.
Example usage
llama-cpp-python
from llama_cpp import Llama

inference_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.
### Telîmat:
{}
### Têketin:
{}
### Bersiv:
"""

llm = Llama.from_pretrained(
    repo_id="nazimali/Mistral-Nemo-Kurdish-Instruct",
    filename="Q4_K_M.gguf",
)

# The prompt template has two slots: the instruction and the input.
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": inference_prompt.format("tu arîkarek alîkar î", "سڵاو ئەلیکوم، چۆنیت؟"),
        }
    ]
)
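create_chat_completion returns an OpenAI-style response dict when not streaming; a minimal sketch of reading the generated reply out of it:

result = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": inference_prompt.format("tu arîkarek alîkar î", "سڵاو ئەلیکوم، چۆنیت؟")}
    ]
)
# The reply text is under choices[0]["message"]["content"].
print(result["choices"][0]["message"]["content"])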
llama.cpp
./llama-cli \
    --hf-repo "nazimali/Mistral-Nemo-Kurdish-Instruct" \
    --hf-file Q4_K_M.gguf \
    -p "selam alikum, tu çawa yî?" \
    --conversation
Transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

infer_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.
### Telîmat:
{}
### Têketin:
{}
### Bersiv:
"""

model_id = "nazimali/Mistral-Nemo-Kurdish-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model in 4-bit NF4 quantization so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model.eval()


def call_llm(user_input, instructions=None):
    # Default instruction: "you are a helpful assistant" (Kurmanji).
    instructions = instructions or "tu arîkarek alîkar î"
    prompt = infer_prompt.format(instructions, user_input)

    input_ids = tokenizer(
        prompt,
        return_tensors="pt",
        add_special_tokens=False,
        return_token_type_ids=False,
    ).to("cuda")

    with torch.inference_mode():
        generated_ids = model.generate(
            **input_ids,
            max_new_tokens=120,
            do_sample=True,
            temperature=0.7,
            top_p=0.7,
            num_return_sequences=1,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )

    decoded_output = tokenizer.batch_decode(generated_ids)[0]
    # Strip the prompt and the end-of-sequence token from the decoded text.
    return decoded_output.replace(prompt, "").replace("</s>", "")


response = call_llm("سڵاو ئەلیکوم، چۆنیت؟")
print(response)
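call_llm also accepts an explicit instruction as its second argument, which fills the ### Telîmat: slot instead of the default helper prompt (the instruction text below is only illustrative):

# Override the default "tu arîkarek alîkar î" instruction.
response = call_llm("سڵاو ئەلیکوم، چۆنیت؟", instructions="Bersivê bi Kurmancî binivîse.")
print(response)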
Training
- Transformers 4.44.2
- 1x NVIDIA A40
- Duration: 7h 41m 12s
{
"total_flos": 2225817933447045000,
"train/epoch": 0.9998075072184792,
"train/global_step": 2597,
"train/grad_norm": 1.172538161277771,
"train/learning_rate": 0,
"train/loss": 0.7774,
"train_loss": 0.892096030377038,
"train_runtime": 27479.3172,
"train_samples_per_second": 1.512,
"train_steps_per_second": 0.095
}
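As a rough sanity check on these numbers: a train_runtime of 27,479 s is about 7.6 hours, 1.512 samples/s over that runtime works out to roughly 41,500 samples, and 41,559 rows over 2,597 steps implies an effective batch size of about 16, all consistent with one epoch over the training set described below.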
Finetuning data:
saillab/alpaca-kurdish_kurmanji-cleaned
- Dataset number of rows: 52,002
- Filtered on the instruction and output columns (see the filtering sketch after this list):
  - Must have at least 1 character
  - Must be less than 10,000 characters
- Number of rows used for training: 41,559
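A minimal sketch of that filtering step with the datasets library; this is an assumed reconstruction of the criteria above (including the split name), not the exact training script:

from datasets import load_dataset

dataset = load_dataset("saillab/alpaca-kurdish_kurmanji-cleaned", split="train")

def keep_row(row):
    # Keep rows whose instruction and output are non-empty and under 10,000 characters.
    return all(
        row[col] is not None and 1 <= len(row[col]) < 10_000
        for col in ("instruction", "output")
    )

filtered = dataset.filter(keep_row)
print(len(dataset), len(filtered))  # original vs. filtered row counts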
Finetuning instruction format:
finetune_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.
### Telîmat:
{}
### Têketin:
{}
### Bersiv:
{}
"""
Model tree for nazimali/Mistral-Nemo-Kurdish-Instruct
- Base model: mistralai/Mistral-Nemo-Base-2407