---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model: google/gemma-2b
datasets:
- ravithejads/samvaad-hi-filtered
- Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized
- Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized
- Telugu-LLM-Labs/sindhi_alpaca_yahma_cleaned_filtered
- Telugu-LLM-Labs/urdu_alpaca_yahma_cleaned_filtered
- Telugu-LLM-Labs/marathi_alpaca_yahma_cleaned_filtered
- Telugu-LLM-Labs/assamese_alpaca_yahma_cleaned_filtered
- Telugu-LLM-Labs/konkani_alpaca_yahma_cleaned_filtered
- Telugu-LLM-Labs/nepali_alpaca_yahma_cleaned_filtered
- abhinand/tamil-alpaca
- Tensoic/airoboros-3.2_kn
- Tensoic/gpt-teacher_kn
- VishnuPJ/Alpaca_Instruct_Malayalam
- Tensoic/Alpaca-Gujarati
- HydraIndicLM/punjabi_alpaca_52K
- HydraIndicLM/bengali_alpaca_dolly_67k
- OdiaGenAI/Odia_Alpaca_instructions_52k
- yahma/alpaca-cleaned
language:
- te
- en
- ta
- ml
- mr
- hi
- kn
- sd
- ne
- ur
- as
- gu
- bn
- pa
- or
library_name: transformers
pipeline_tag: text-generation
---

# Indic-gemma-2b-finetuned-sft-Navarasa-2.0

This model is based on [google/gemma-2b](https://huggingface.co/google/gemma-2b) and has been LoRA fine-tuned on instruction datasets in 15 Indian languages and English:

1. #### Hindi - [ravithejads/samvaad-hi-filtered](https://huggingface.co/datasets/ravithejads/samvaad-hi-filtered), [HydraIndicLM/hindi_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/hindi_alpaca_dolly_67k)(sampled)
2. #### Telugu - [Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/telugu_alpaca_yahma_cleaned_filtered_romanized), [Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized](https://huggingface.co/datasets/Telugu-LLM-Labs/telugu_teknium_GPTeacher_general_instruct_filtered_romanized)
3. #### Marathi - [Telugu-LLM-Labs/marathi_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/marathi_alpaca_yahma_cleaned_filtered)
4. #### Urdu - [Telugu-LLM-Labs/urdu_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/urdu_alpaca_yahma_cleaned_filtered)
5. #### Assamese - [Telugu-LLM-Labs/assamese_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/assamese_alpaca_yahma_cleaned_filtered)
6. #### Konkani - [Telugu-LLM-Labs/konkani_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/konkani_alpaca_yahma_cleaned_filtered)
7. #### Nepali - [Telugu-LLM-Labs/nepali_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/nepali_alpaca_yahma_cleaned_filtered)
8. #### Sindhi - [Telugu-LLM-Labs/sindhi_alpaca_yahma_cleaned_filtered](https://huggingface.co/datasets/Telugu-LLM-Labs/sindhi_alpaca_yahma_cleaned_filtered)
9. #### Tamil - [abhinand/tamil-alpaca](https://huggingface.co/datasets/abhinand/tamil-alpaca)
10. #### Kannada - [Tensoic/airoboros-3.2_kn](https://huggingface.co/datasets/Tensoic/airoboros-3.2_kn), [Tensoic/gpt-teacher_kn](https://huggingface.co/datasets/Tensoic/gpt-teacher_kn)
11. #### Malayalam - [VishnuPJ/Alpaca_Instruct_Malayalam](https://huggingface.co/datasets/VishnuPJ/Alpaca_Instruct_Malayalam)
12. #### Gujarati - [Tensoic/Alpaca-Gujarati](https://huggingface.co/datasets/Tensoic/Alpaca-Gujarati)
13. #### Punjabi - [HydraIndicLM/punjabi_alpaca_52K](https://huggingface.co/datasets/HydraIndicLM/punjabi_alpaca_52K)
14. #### Bengali - [HydraIndicLM/bengali_alpaca_dolly_67k](https://huggingface.co/datasets/HydraIndicLM/bengali_alpaca_dolly_67k)(alpaca filtered)
15. #### Odia - [OdiaGenAI/Odia_Alpaca_instructions_52k](https://huggingface.co/datasets/OdiaGenAI/Odia_Alpaca_instructions_52k), [OdiaGenAI/gpt-teacher-roleplay-odia-3k](https://huggingface.co/datasets/OdiaGenAI/gpt-teacher-roleplay-odia-3k)
16. #### English - [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)

The model was fine-tuned using the [unsloth](https://github.com/unslothai/unsloth) library, and we provide inference code with the same library for faster inference. Alternatively, you can use the Hugging Face Transformers library for inference.

# Training Details:

The model was trained on approximately 650K instruction samples.
1. GPU: 1x A100, 80GB
2. Training time: 45 hours
3. Platform: [E2E Networks](https://www.e2enetworks.com/)

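For reference, here is a minimal sketch of a LoRA setup with unsloth; the rank, alpha, and target modules below are illustrative assumptions, not the exact configuration used for this model (see the code repository linked below for the actual training scripts).

```python3
from unsloth import FastLanguageModel

# Load the base model (hypothetical settings for illustration).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "google/gemma-2b",
    max_seq_length = 2048,
    dtype = None,          # auto-detect; bfloat16 on Ampere+ GPUs
    load_in_4bit = False,
)

# Attach LoRA adapters; r, lora_alpha, and target_modules are assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    lora_dropout = 0,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing = True,
    random_state = 3407,
)
```
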
# Installation

```
!pip install -U xformers --index-url https://download.pytorch.org/whl/cu121
!pip install "unsloth[kaggle-new] @git+https://github.com/unslothai/unsloth.git@nightly"
```

# Input Text Format

```
### Instruction: {instruction}

### Input: {input}

### Response: {response}
```

# Inference With Unsloth

```python3
from unsloth import FastLanguageModel
import torch
max_seq_length = 2048
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = False 
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    device_map="auto"
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
        "Tranlsate following sentence to Hindi.", # instruction
        "India is a great country.", # input
        "", # output - leave this blank for generation!
    )

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)
```
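
Note that `batch_decode` returns the full sequence, prompt included. A small post-processing sketch (our addition, not part of the original example) to keep only the generated text:

```python3
# Slice off the prompt tokens so only newly generated tokens are decoded.
prompt_length = inputs["input_ids"].shape[1]
response_only = tokenizer.batch_decode(
    outputs[:, prompt_length:], skip_special_tokens = True
)[0]
print(response_only)
```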

# Inference with HuggingFace

```python3
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

hf_token = "hf_..." # placeholder: your Hugging Face access token, if required

model = AutoModelForCausalLM.from_pretrained(
    "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
    load_in_4bit = False,
    token = hf_token
)
model.to("cuda")

tokenizer = AutoTokenizer.from_pretrained("Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0")

input_prompt = """
### Instruction:
{}

### Input:
{}

### Response:
{}"""

input_text = input_prompt.format(
        "Tranlsate following sentence to Hindi.", # instruction
        "India is a great country.", # input
        "", # output - leave this blank for generation!
    )

inputs = tokenizer([input_text], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 300, use_cache = True)
response = tokenizer.batch_decode(outputs)[0]
```
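
If GPU memory is limited, the model can also be loaded in 4-bit via bitsandbytes. A minimal sketch; this quantized loading path is our assumption and has not been validated for this model:

```python3
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute (illustrative settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit = True,
    bnb_4bit_quant_type = "nf4",
    bnb_4bit_compute_dtype = torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0",
    quantization_config = bnb_config,
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "Telugu-LLM-Labs/Indic-gemma-2b-finetuned-sft-Navarasa-2.0"
)
```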

Refer to the [blog post](https://ravidesetty.medium.com/introducing-navarasa-2-0-indic-gemma-7b-2b-instruction-tuned-model-on-15-indian-languages-31f6565b2750) for sample outputs.

Please check our [Code Repository](https://github.com/TeluguLLMLabs/Indic-gemma-7b-Navarasa) for training and inference scripts.


# Developers:

The model is a collaborative effort by [Ravi Theja](https://twitter.com/ravithejads) and [Ramsri Goutham](https://twitter.com/ramsri_goutham). Feel free to DM either of us if you have any questions.