---
base_model:
- nazimali/Mistral-Nemo-Kurdish
language:
- ku
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- saillab/alpaca-kurdish_kurmanji-cleaned
library_name: transformers
---
This is a 12B-parameter model, finetuned from `nazimali/Mistral-Nemo-Kurdish` on a single Kurdish (Kurmanji) instruction dataset. My intention was to train it on both Kurdish Kurmanji (Latin script) and Kurdish Sorani (Arabic script), but training took much longer than anticipated, so I started with one full Kurdish Kurmanji dataset.
I will look into a multi-GPU training setup so I don't have to wait all day for results, and then train it with both Kurmanji and Sorani Arabic script.
Try the [Spaces demo](https://huggingface.co/spaces/nazimali/Mistral-Nemo-Kurdish-Instruct) for a live example.
### Example usage
#### llama-cpp-python
```python
from llama_cpp import Llama

# Alpaca-style instruction template in Kurmanji; roughly: "Below is an
# instruction that describes a task, paired with an input that provides
# further context. Write a response that appropriately completes the request."
inference_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.
### Telîmat:
{}
### Têketin:
{}
### Bersiv:
"""
llm = Llama.from_pretrained(
    repo_id="nazimali/Mistral-Nemo-Kurdish-Instruct",
    filename="Q4_K_M.gguf",
)
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            # The template has two slots (instruction, input);
            # the input slot is left empty here.
            "content": inference_prompt.format("selam alikum, tu çawa yî?", ""),
        }
    ]
)
```
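The return value follows llama-cpp-python's OpenAI-compatible chat-completion schema, so the reply text can be read out of the response dict:

```python
# Extract the generated reply from the OpenAI-style response dict.
print(response["choices"][0]["message"]["content"])
```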
#### llama.cpp
```shell
./llama-cli \
--hf-repo "nazimali/Mistral-Nemo-Kurdish-Instruct" \
--hf-file Q4_K_M.gguf \
-p "selam alikum, tu çawa yî?" \
--conversation
```
#### Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nazimali/Mistral-Nemo-Kurdish-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model with 4-bit NF4 quantization to reduce memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```
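To generate a reply with the quantized model, the same instruction template from the llama-cpp-python example can be reused. A minimal sketch, assuming `inference_prompt` is defined as above; the generation settings are illustrative, not taken from the card:

```python
# Fill the instruction slot, leave the input slot empty, and decode.
inputs = tokenizer(
    inference_prompt.format("selam alikum, tu çawa yî?", ""),
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```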
### Training
- Transformers `4.44.2`
- 1× NVIDIA A40
- Duration: 7h 41m 12s
```json
{
"total_flos": 2225817933447045000,
"train/epoch": 0.9998075072184792,
"train/global_step": 2597,
"train/grad_norm": 1.172538161277771,
"train/learning_rate": 0,
"train/loss": 0.7774,
"train_loss": 0.892096030377038,
"train_runtime": 27479.3172,
"train_samples_per_second": 1.512,
"train_steps_per_second": 0.095
}
```
#### Finetuning data:
- `saillab/alpaca-kurdish_kurmanji-cleaned`
- Dataset rows: 52,002
- Filtered on the `instruction` and `output` columns: each must have at least 1 character and fewer than 10,000 characters
- Rows used for training: 41,559 (a sketch of this filter follows the list)
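A rough sketch of the filtering described above, using the standard `datasets` API (the exact filtering code was not published, so the predicate below is an assumption):

```python
from datasets import load_dataset

dataset = load_dataset("saillab/alpaca-kurdish_kurmanji-cleaned", split="train")

def keep(row):
    # Assumed predicate: both fields present, with at least 1 and
    # fewer than 10,000 characters each.
    return all(
        row[col] is not None and 0 < len(row[col]) < 10_000
        for col in ("instruction", "output")
    )

dataset = dataset.filter(keep)  # 52,002 -> 41,559 rows reported
```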
#### Finetuning instruction format:
```python
finetune_prompt = """Li jêr rêwerzek heye ku peywirek rave dike, bi têketinek ku çarçoveyek din peyda dike ve tê hev kirin. Bersivek ku daxwazê bi guncan temam dike binivîsin.
### Telîmat:
{}
### Têketin:
{}
### Bersiv:
{}
"""
```
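Since only the `instruction` and `output` columns were used, the middle (input) slot is presumably left empty. A hedged sketch of how a row might be rendered for training; appending the tokenizer's EOS token follows common Alpaca-style finetuning and is an assumption:

```python
def format_row(row, eos_token):
    # Hypothetical helper: instruction fills the first slot, the input
    # slot stays empty, and output fills the response slot.
    return finetune_prompt.format(row["instruction"], "", row["output"]) + eos_token
```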