---
language:
- ms
---

# Full Parameter Finetuning 16384 context length Gemma 2B on Malaysian instructions dataset

README at https://github.com/mesolitica/malaya/tree/5.1/session/gemma#instructions-2b-16384-context-length

We use the exact Gemma Instruct chat template.
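
To see the exact prompt string the template produces, it can be rendered without tokenizing; a minimal sketch using the tokenizer shipped with this repository:

```python
from transformers import AutoTokenizer

# The tokenizer from this repository carries the chat template used during finetuning.
tokenizer = AutoTokenizer.from_pretrained('mesolitica/gemma-2B-16k-instructions')

messages = [
    {'role': 'user', 'content': 'kwsp tu apa'},
]

# Render the template as a plain string to inspect the prompt format.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```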

WandB, https://wandb.ai/huseinzol05/gemma-2B-8192-fpf-instructions-16k?workspace=user-huseinzol05

## Dataset

Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2

Notebook to prepare dataset at https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-v6.ipynb
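
To take a quick look at one of the instruction sets in that collection, it can be loaded with the `datasets` library; a minimal sketch, where the dataset id below is only a hypothetical placeholder for an actual repo from the collection:

```python
from datasets import load_dataset

# Hypothetical dataset id: replace with any repo from the collection above.
dataset = load_dataset('mesolitica/example-malaysian-instructions', split='train')

# Print one row to inspect the instruction/response fields.
print(dataset[0])
```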

## Limitations

This model is a quick demonstration that the base model can be easily finetuned to achieve some level of performance.
It has only minimal moderation mechanisms.

## How-to

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

TORCH_DTYPE = 'bfloat16'

# 4-bit NF4 quantization config so the model fits comfortably on a single GPU.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)

tokenizer = AutoTokenizer.from_pretrained('mesolitica/gemma-2B-16k-instructions')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/gemma-2B-16k-instructions',
    use_flash_attention_2=True,
    quantization_config=nf4_config
)

# Build the prompt with the chat template; the template already inserts the
# special tokens, so the tokenizer call below must not add them again.
messages = [
    {'role': 'user', 'content': 'kwsp tu apa'}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')

generate_kwargs = dict(
    inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```

```text
<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s>
```
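
The decode above returns the prompt together with the generated answer and special tokens. To watch only the generated continuation as it is produced, the same `model`, `tokenizer`, and `inputs` from the snippet above can be reused with a streamer; a minimal sketch:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
# and any special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    **inputs,
    streamer=streamer,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
)
```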