KillerShoaib committed (verified) · Commit 3c6a9a6 · 1 Parent(s): 83ddcc8

Update README.md

Files changed (1):
  1. README.md +147 -7
README.md CHANGED
@@ -1,22 +1,162 @@
  ---
- base_model: KillerShoaib/gemma-2-9b-bangla-16bit
  language:
- - en
  license: apache-2.0
  tags:
  - text-generation-inference
  - transformers
  - unsloth
- - gemma2
  - trl
  ---

- # Uploaded model

  - **Developed by:** KillerShoaib
  - **License:** apache-2.0
- - **Finetuned from model :** KillerShoaib/gemma-2-9b-bangla-16bit

- This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
  language:
+ - bn
  license: apache-2.0
  tags:
  - text-generation-inference
  - transformers
  - unsloth
+ - gemma2
  - trl
+ base_model: unsloth/gemma-2-9b-it-bnb-4bit
+ inference: false
  ---

+ # Gemma-2 Bangla 4-bit
+
+ <div align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/65ca6f0098a46a56261ac3ac/k4MH-skEIgS_zHXyAxtXv.jpeg" width="300"/>
+
+ </div>

  - **Developed by:** KillerShoaib
  - **License:** apache-2.0
+ - **Finetuned from model:** unsloth/gemma-2-9b-it-bnb-4bit
+ - **Dataset used for fine-tuning:** iamshnoo/alpaca-cleaned-bengali
+
+
+ # 4-bit Quantization
+ **This is a 4-bit quantization of the Gemma-2 9b Bangla model.**
+
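+ For reference, a pre-quantized 4-bit checkpoint like this roughly corresponds to loading with a bitsandbytes config along these lines (an illustrative sketch; the exact quantization settings baked into this repo are not restated here):
+
+ ```python
+ # Illustration only: a typical bnb 4-bit config; the exact values for this repo are assumptions.
+ import torch
+ from transformers import BitsAndBytesConfig
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit = True,
+     bnb_4bit_quant_type = "nf4",           # 4-bit NormalFloat (assumed)
+     bnb_4bit_compute_dtype = torch.float16,
+     bnb_4bit_use_double_quant = True,      # assumed
+ )
+ ```
+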
+ # Gemma-2 Bangla Different Formats
+
+ - `LoRA Adapters only` - [**KillerShoaib/gemma-2-9b-bangla-lora**](https://huggingface.co/KillerShoaib/gemma-2-9b-bangla-lora)
+ - `16-bit model` - [**KillerShoaib/gemma-2-9b-bangla-16bit**](https://huggingface.co/KillerShoaib/gemma-2-9b-bangla-16bit)
+
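+ If you only want the adapters, one way to use them (a sketch, assuming a standard PEFT setup) is to let PEFT attach them to the base model automatically:
+
+ ```python
+ # Sketch: load the LoRA-adapters-only variant with plain PEFT.
+ # AutoPeftModelForCausalLM reads adapter_config.json and pulls in the base model itself.
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     "KillerShoaib/gemma-2-9b-bangla-lora",
+     load_in_4bit = True,   # keep memory low; assumed to match the 4-bit base model
+ )
+ tokenizer = AutoTokenizer.from_pretrained("KillerShoaib/gemma-2-9b-bangla-lora")
+ ```
+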
+ # Model Details
+
+ The **Gemma 2 9 billion** parameter model was finetuned with the **unsloth** package on the **cleaned Bangla Alpaca** dataset using the **LoRA** finetuning technique, for **1 epoch** on a single T4 GPU.
+
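+ A minimal sketch of what that setup looks like, assuming the standard Unsloth LoRA recipe (the hyperparameters below are assumptions; the authoritative version lives in the GitHub repo linked at the bottom):
+
+ ```python
+ # Sketch of the LoRA finetuning setup; hyperparameters are assumed, not the published ones.
+ from unsloth import FastLanguageModel
+ from trl import SFTTrainer
+ from transformers import TrainingArguments
+ from datasets import load_dataset
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "unsloth/gemma-2-9b-it-bnb-4bit",
+     max_seq_length = 2048,
+     load_in_4bit = True,
+ )
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r = 16,                                  # LoRA rank (assumed)
+     target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
+                       "gate_proj", "up_proj", "down_proj"],
+     lora_alpha = 16,
+ )
+ dataset = load_dataset("iamshnoo/alpaca-cleaned-bengali", split = "train")
+ # ... format each row with the alpaca_prompt template shown below into a "text" field ...
+ trainer = SFTTrainer(
+     model = model,
+     tokenizer = tokenizer,
+     train_dataset = dataset,
+     dataset_text_field = "text",
+     max_seq_length = 2048,
+     args = TrainingArguments(num_train_epochs = 1, per_device_train_batch_size = 2,
+                              learning_rate = 2e-4, output_dir = "outputs"),
+ )
+ trainer.train()
+ ```
+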
+ # Pros & Cons of the Model
+
+ ## Pros
+
+ - **The model can comprehend the Bangla language, including its semantic nuances**
+ - **Given a context, the model can answer questions based on that context**
+
+ ## Cons
+ - **The model is unable to do creative or complex work, e.g. writing a poem or solving a math problem in Bangla**
+ - **Since the dataset was small, the model lacks a lot of general knowledge in Bangla**
+
+ ## Llama 3 8b Bangla Vs Gemma 2 9b Bangla
+ - **Overall, the two models perform similarly, with the same pros and cons: both struggle with longer-context queries and reasoning, but both can answer a question when the context is given**
+
+
+ # Run The Model
+
+ ## Download the right dependency for Unsloth
+
+ **Google Colab**
+
+ ```python
+ %%capture
+ # Installs Unsloth, Xformers (Flash Attention) and all other packages!
+ !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+
+ # We have to check which Torch version for Xformers (2.3 -> 0.0.27)
+ from torch import __version__; from packaging.version import Version as V
+ xformers = "xformers==0.0.27" if V(__version__) < V("2.4.0") else "xformers"
+ !pip install --no-deps {xformers} trl peft accelerate bitsandbytes triton
+ ```
+
+ ## FastLanguageModel from unsloth for 2x faster inference
+
+ ```python
+ from unsloth import FastLanguageModel
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "KillerShoaib/gemma-2-9b-bangla-lora", # or KillerShoaib/gemma-2-9b-bangla-4bit / KillerShoaib/gemma-2-9b-bangla-16bit
+     max_seq_length = 2048,
+     dtype = None,
+     load_in_4bit = True,
+ )
+ FastLanguageModel.for_inference(model) # enables Unsloth's 2x faster inference
+
+ # alpaca_prompt for the model
+ alpaca_prompt = """Below is an instruction in bangla that describes a task, paired with an input also in bangla that provides further context. Write a response in bangla that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ # input with instruction and input
+ inputs = tokenizer(
+ [
+     alpaca_prompt.format(
+         "সুস্থ থাকার তিনটি উপায় বলুন", # instruction ("Tell me three ways to stay healthy")
+         "", # input
+         "", # output - leave this blank for generation!
+     )
+ ], return_tensors = "pt").to("cuda")
+
+ # generating the output and decoding it
+ outputs = model.generate(**inputs, max_new_tokens = 2048, use_cache = True)
+ tokenizer.batch_decode(outputs)
+ ```
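+
+ To see tokens as they are produced instead of waiting for the full decode, you can pass a streamer to `generate` (a standard `transformers` pattern, not specific to this model):
+
+ ```python
+ # Stream the generated Bangla text token by token.
+ from transformers import TextStreamer
+
+ text_streamer = TextStreamer(tokenizer)
+ _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 2048, use_cache = True)
+ ```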
+
+ ## AutoModelForCausalLM from Hugging Face
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "KillerShoaib/gemma-2-9b-bangla-4bit" # or KillerShoaib/gemma-2-9b-bangla-16bit
+ tokenizer_name = model_name
+
+ # Load tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
+ # Load model (device_map="auto" places it on the GPU, matching the inputs below)
+ model = AutoModelForCausalLM.from_pretrained(model_name, device_map = "auto")
+
+ # Text prompt to start generation
+ alpaca_prompt = """Below is an instruction in bangla that describes a task, paired with an input also in bangla that provides further context. Write a response in bangla that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ # Encode the prompt text
+ inputs = tokenizer(
+ [
+     alpaca_prompt.format(
+         "বিশ্বের সবচেয়ে বিখ্যাত চিত্রশিল্পী কে?", # instruction ("Who is the most famous painter in the world?")
+         "", # input
+         "", # output - leave this blank for generation!
+     )
+ ], return_tensors = "pt").to("cuda")
+
+ # output
+ outputs = model.generate(**inputs, max_new_tokens = 1024, use_cache = True)
+
+ tokenizer.batch_decode(outputs)
+ ```
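+
+ `batch_decode` returns the prompt together with the completion; if you only want the model's answer, you can split on the `### Response:` marker (a simple post-processing convention, not part of the model API):
+
+ ```python
+ # Keep only the generated response, dropping the echoed prompt.
+ decoded = tokenizer.batch_decode(outputs, skip_special_tokens = True)[0]
+ response = decoded.split("### Response:")[-1].strip()
+ print(response)
+ ```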
+
+
+ # Inference Script & Github Repo
+
+ - `Google Colab` - [**Gemma-2 9b Bangla Inference Script**](https://colab.research.google.com/drive/13urCM6hQ2zE401uwu4laS9czi0K6qFUz?usp=sharing)
+ - `Github Repo` - [**Llama-3 Bangla**](https://github.com/KillerShoaib/Llama-3-Bangla)
+
+ **‼️ The GitHub repo shows how to finetune any Unsloth model using incremental training. For the Gemma 2 9b finetuning I used the exact same logic that was used for the Llama 3 8b model. Remember to change the dependencies based on the Unsloth notebook example. ‼️**
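+
+ The core idea of incremental training, sketched under the assumption that it follows the usual Unsloth pattern of reloading previously saved LoRA adapters and continuing on the next slice of data (see the repo for the authoritative version):
+
+ ```python
+ # Sketch: resume finetuning from saved LoRA adapters instead of the raw base model.
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name = "KillerShoaib/gemma-2-9b-bangla-lora",  # adapters saved from the previous run
+     max_seq_length = 2048,
+     load_in_4bit = True,
+ )
+ # Build an SFTTrainer exactly as in a normal run, pass it the next chunk of the
+ # dataset, and call trainer.train(); the adapters keep accumulating updates.
+ ```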