---
base_model:
- GoToCompany/gemma2-9b-cpt-sahabatai-v1-base
language:
- en
- id
- jv
- su
license: gemma
---
# Gemma2 9B CPT Sahabat-AI v1 Instruct
**Sahabat-AI** (Indonesian for “close friends”) is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Indonesian language and its various dialects. The Sahabat-AI ecosystem is co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
Gemma2 9B CPT Sahabat-AI v1 Instruct is an Indonesian-focused model which has been fine-tuned with around **448,000 Indonesian instruction-completion pairs** alongside an Indonesian-dialect pool consisting of **96,000 instruction-completion pairs in Javanese** and **98,000 instruction-completion pairs in Sundanese**. Additionally, we added a pool of **129,000 instruction-completion pairs in English**.
- **Co-initiated by:** PT GoTo Gojek Tokopedia Tbk, Indosat Ooredoo Hutchison
- **Developed by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
- **Model type:** Decoder
- **Languages:** English, Indonesian, Javanese, Sundanese
- **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
## Model Details
### Model Description
We performed instruction tuning in Indonesian, Javanese, Sundanese as well as English on our [continued pre-trained Gemma2 9B CPT Sahabat-AI v1](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT Sahabat-AI v1 Instruct.
For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8,192 tokens.
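As a hedged illustration (the example question below is ours, not from the model card), the inherited tokenizer, chat template, and configured context window can be inspected directly:

```python
# Inspect the Gemma2 tokenizer and chat template inherited from Gemma-2-9B.
from transformers import AutoConfig, AutoTokenizer

model_id = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Gemma2's chat template wraps each turn in <start_of_turn>/<end_of_turn> markers.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Apa ibu kota Indonesia?"}],  # illustrative question
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)

config = AutoConfig.from_pretrained(model_id)
print(config.max_position_embeddings)  # expected to report 8192
```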
### Benchmark Performance
We evaluated Gemma2 9B CPT Sahabat-AI v1 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed:
- the [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
  - These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
  - We also added support for Javanese and Sundanese for the BHASA tasks wherever applicable.
- [IndoMMLU](https://arxiv.org/pdf/2310.04928)
  - These tasks include examination questions on Humanities, Indonesian language, Local languages and cultures, Social science and STEM across primary, middle, and high school levels.
- the common English tasks from the [HuggingFace LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
  - These tasks consist of [IFEval, BBH, Math Lvl 5, GPQA, MuSR, and MMLU-PRO](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about).
  - **Caveat**: Our results differ from the HuggingFace LLM Leaderboard because we used [vLLM](https://docs.vllm.ai/en/latest/) as our inference platform. vLLM caps the context size at **4096 tokens**, while the HuggingFace evaluation used **8192 tokens**; a hedged sketch of this vLLM setup follows this list.
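As a rough illustration of the caveat above (a minimal sketch under our assumptions, not the team's actual evaluation harness; the prompt and sampling parameters are placeholders):

```python
# Minimal vLLM setup with the context window capped at 4096 tokens,
# matching the cap mentioned in the caveat above.
from vllm import LLM, SamplingParams

llm = LLM(
    model="GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct",
    max_model_len=4096,  # vLLM context cap from the caveat
    dtype="bfloat16",
)
outputs = llm.generate(
    ["Apa ibu kota Indonesia?"],                      # placeholder prompt
    SamplingParams(max_tokens=128, temperature=0.0),  # greedy decoding
)
print(outputs[0].outputs[0].text)
```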
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.
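The normalisation against random chance can be sketched as follows (our formulation of the idea for a score on a 0-100 scale, not SEA HELM's verbatim implementation):

```python
def normalise_score(raw: float, random_baseline: float) -> float:
    """Map random-chance performance to 0 and a perfect score to 100.
    A sketch of the normalisation idea, not SEA HELM's exact code."""
    return max(0.0, (raw - random_baseline) / (100.0 - random_baseline) * 100.0)

# e.g. 62.5% raw accuracy on a 4-option multiple-choice task (25% baseline):
print(normalise_score(62.5, 25.0))  # 50.0
```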
#### Instruction-following Capabilities
Since Gemma2 9B CPT Sahabat-AI v1 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with the [IFEval](https://arxiv.org/abs/2311.07911) dataset.
As this dataset was in English, the linguists and native speakers in the team worked together to filter, localize and translate the dataset into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
*Note*: IFEval was only used on Bahasa Indonesia. We are currently working on adding it for Javanese and Sundanese for our upcoming releases.
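As a rough sketch of the language-normalised scoring described above (our reading of the rule, not the official IFEval scorer):

```python
def language_normalised_accuracy(followed_constraint: list[bool],
                                 correct_language: list[bool]) -> float:
    """A response counts only if it both satisfies the prompt's constraint
    and is in the target language -- our illustration, not the official scorer."""
    hits = sum(f and c for f, c in zip(followed_constraint, correct_language))
    return hits / len(followed_constraint)

# Three responses: two satisfy the constraint, but one of those replied
# in the wrong language, so only one counts.
print(language_normalised_accuracy([True, True, False], [True, False, True]))  # 0.33...
```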
#### Results
#### Indonesian Results
#### SEA HELM (also known as BHASA)
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Language / Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall (Bahasa Indonesia + Javanese + Sundanese)</td>
<td style="border: 1px solid gray; padding: 8px;">36.963</td>
<td style="border: 1px solid gray; padding: 8px;">42.988</td>
<td style="border: 1px solid gray; padding: 8px;">37.805</td>
<td style="border: 1px solid gray; padding: 8px;">45.866</td>
<td style="border: 1px solid gray; padding: 8px;">46.880</td>
<td style="border: 1px solid gray; padding: 8px;">56.359</td>
<td style="border: 1px solid gray; padding: 8px;">53.725</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">61.169</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Bahasa Indonesia</td>
<td style="border: 1px solid gray; padding: 8px;">46.760</td>
<td style="border: 1px solid gray; padding: 8px;">60.372</td>
<td style="border: 1px solid gray; padding: 8px;">42.022</td>
<td style="border: 1px solid gray; padding: 8px;">51.944</td>
<td style="border: 1px solid gray; padding: 8px;">54.579</td>
<td style="border: 1px solid gray; padding: 8px;">63.394</td>
<td style="border: 1px solid gray; padding: 8px;">57.221</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">64.154</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Javanese</td>
<td style="border: 1px solid gray; padding: 8px;">33.956</td>
<td style="border: 1px solid gray; padding: 8px;">40.625</td>
<td style="border: 1px solid gray; padding: 8px;">41.739</td>
<td style="border: 1px solid gray; padding: 8px;">47.587</td>
<td style="border: 1px solid gray; padding: 8px;">48.012</td>
<td style="border: 1px solid gray; padding: 8px;">56.468</td>
<td style="border: 1px solid gray; padding: 8px;">56.460</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">64.439</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Sundanese</td>
<td style="border: 1px solid gray; padding: 8px;">30.173</td>
<td style="border: 1px solid gray; padding: 8px;">27.969</td>
<td style="border: 1px solid gray; padding: 8px;">29.654</td>
<td style="border: 1px solid gray; padding: 8px;">38.068</td>
<td style="border: 1px solid gray; padding: 8px;">38.050</td>
<td style="border: 1px solid gray; padding: 8px;">49.216</td>
<td style="border: 1px solid gray; padding: 8px;">47.495</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">54.913</td>
</tr>
</table>
#### IndoMMLU
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Meta-Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall Results</td>
<td style="border: 1px solid gray; padding: 8px;">53.0%</td>
<td style="border: 1px solid gray; padding: 8px;">56.0%</td>
<td style="border: 1px solid gray; padding: 8px;">51.9%</td>
<td style="border: 1px solid gray; padding: 8px;">53.8%</td>
<td style="border: 1px solid gray; padding: 8px;">54.4%</td>
<td style="border: 1px solid gray; padding: 8px;">61.4%</td>
<td style="border: 1px solid gray; padding: 8px;">55.6%</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">62.6%</td>
</tr>
</table>
#### English Results
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Average</td>
<td style="border: 1px solid gray; padding: 8px;">24.48</td>
<td style="border: 1px solid gray; padding: 8px;">27.75</td>
<td style="border: 1px solid gray; padding: 8px;">23.91</td>
<td style="border: 1px solid gray; padding: 8px;">27.98</td>
<td style="border: 1px solid gray; padding: 8px;">24.52</td>
<td style="border: 1px solid gray; padding: 8px;">26.44</td>
<td style="border: 1px solid gray; padding: 8px;">24.43</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">33.67</td>
</tr>
</table>
### Usage
Gemma2 9B CPT Sahabat-AI v1 Instruct can be run using the 🤗 Transformers library:
```python
# Please use transformers==4.45.0
import torch
import transformers

model_id = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct"

# Load the model in bfloat16 and shard it across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Stop generation at the end-of-sequence token or Gemma2's end-of-turn marker.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<end_of_turn>"),
]

# Javanese
messages = [
    {"role": "user", "content": "Sopo wae sing ana ing Punakawan?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
)
# The pipeline returns the full conversation; print the assistant's reply.
print(outputs[0]["generated_text"][-1])

# Sundanese
messages = [
    {"role": "user", "content": "Kumaha caritana si Kabayan?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
Users should be aware that our model exhibits certain limitations. Like many LLMs, it can hallucinate, occasionally generating irrelevant content or introducing fictional elements that are not grounded in the provided context. Users should also exercise caution when interpreting and validating the model's responses, given potential inconsistencies in its reasoning.
## Limitations
### Safety
Current Sahabat-AI models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Gemma2 9B CPT Sahabat-AI v1 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 4 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs.
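The exact merge recipe is not published in this card; as a hedged sketch only, a uniform average of checkpoint weights (a "model soup") is one common way such merges are performed:

```python
# Hypothetical sketch of a uniform checkpoint merge ("model soup").
# The checkpoint paths are placeholders, not real Sahabat-AI artifacts.
import torch

def merge_checkpoints(paths: list[str]) -> dict[str, torch.Tensor]:
    """Elementwise average of state dicts with identical keys and shapes."""
    state_dicts = [torch.load(p, map_location="cpu") for p in paths]
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

merged = merge_checkpoints(["checkpoint_a.pt", "checkpoint_b.pt"])  # placeholders
```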
## Data
Gemma2 9B CPT Sahabat-AI v1 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Collaboration
Sahabat-AI (Indonesian for “close friends”) is a **local open-source Large Language Model (LLM) ecosystem for the Indonesian language**, co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
The Sahabat-AI ecosystem aims to empower Indonesians who want to develop AI-based services and applications using Bahasa Indonesia and its various local dialects.
We are supported by research centers and global tech experts, such as AI Singapore and Tech Mahindra, who help train the model to gain general language understanding.
We also collaborate with top Indonesian universities, such as the University of Indonesia, Gadjah Mada University, Bogor Institute of Agriculture, and Bandung Institute of Technology, as well as leading Indonesian media groups, such as Kompas Gramedia Group and Republika, to train and enrich the model in Bahasa Indonesia, ensuring the optimal provision of local context and cultural relevance.
We would like to invite **researchers, developers, and language enthusiasts** to actively contribute to the enhancement and expansion of Sahabat-AI.
Your collaborations can involve:
- Identifying and reporting technical issues
- Sharing pre-training, instruction, and preference data
- Improving documentation usability
- Proposing and implementing new model evaluation tasks and metrics
Join us in shaping the future of Sahabat-AI by sharing your expertise and insights to make these models more accessible, accurate, and versatile.
You can contribute your ideas through [this form](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit).
## The Development Team (in ascending alphabetical order)
### AI Singapore
Chan Adwin<br>
Cheng Nicholas<br>
Choa Esther<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Limkonchotiwat Peerat<br>
Liu Bing Jie Darius<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teng Walter<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
### PT GoTo Gojek Tokopedia Tbk
Anissa Dininta<br>
Chau Shiau Ching<br>
Choiri Hendra Hadhil<br>
Goel Priyank<br>
Saini Ajay Kumar<br>
Shalev Ofir<br>
Tan Daryl<br>
Tep Kilian Rithi<br>
Tiwari Anupam<br>
Widjojo Daniel<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [Sahabat-AI Inquiry Form](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit).
## Disclaimer
This is the repository for the Instruct model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## References
### IndoMMLU Reference
```bibtex
@inproceedings{koto-etal-2023-indommlu,
    title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
    author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
}
``` |