
Gemma2 9B CPT SEA-LIONv3 Instruct

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

Gemma2 9B CPT SEA-LIONv3 Instruct is a multilingual model which has been fine-tuned with around 500,000 English instruction-completion pairs alongside a larger pool of around 1,000,000 instruction-completion pairs from other ASEAN languages, such as Indonesian, Thai and Vietnamese.

SEA-LION stands for Southeast Asian Languages In One Network.

  • Developed by: Products Pillar, AI Singapore
  • Funded by: Singapore NRF
  • Model type: Decoder
  • Languages: English, Chinese, Vietnamese, Indonesian, Thai, Filipino, Tamil, Malay, Khmer, Lao, Burmese, Javanese, Sundanese
  • License: Gemma Community License

Model Details

Model Description

To create Gemma2 9B CPT SEA-LIONv3 Instruct, we performed instruction tuning in English and in ASEAN languages such as Indonesian, Thai and Vietnamese on Gemma2 9B CPT SEA-LIONv3, our continued pre-trained decoder model built on the Gemma2 architecture.

For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8,192 tokens.
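
Both properties can be verified directly; a minimal sketch, assuming the model ID from the Usage section below:

from transformers import AutoConfig, AutoTokenizer

model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct"

# The tokenizer is the default Gemma-2-9B tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(len(tokenizer))  # tokenizer vocabulary size

# The context length is recorded in the model config.
config = AutoConfig.from_pretrained(model_id)
print(config.max_position_embeddings)  # 8192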

Benchmark Performance

We evaluated Gemma2 9B CPT SEA-LIONv3 Instruct on both general language capabilities and instruction-following capabilities.

General Language Capabilities

For the evaluation of general language capabilities, we employed the SEA HELM (also known as BHASA) evaluation benchmark across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).

Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The score for each task is normalised to account for baseline performance due to random chance.
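
As a purely illustrative sketch (not the official SEA HELM implementation), answer extraction and chance normalisation could look like the following; the "Answer:" tag format and option matching are hypothetical:

import re

# Illustrative only; the actual SEA HELM tag format may differ.
def extract_answer(response, options):
    # Pull the text after the answer tag and match it to a pre-defined option.
    match = re.search(r"Answer:\s*(.+)", response, flags=re.IGNORECASE)
    if not match:
        return None
    answer = match.group(1).strip().lower()
    return next((opt for opt in options if opt.lower() in answer), None)

def normalise_for_chance(raw_accuracy, n_options):
    # Rescale so that random guessing maps to 0 and perfect accuracy to 1.
    chance = 1.0 / n_options
    return (raw_accuracy - chance) / (1.0 - chance)

print(extract_answer("Answer: Positive", ["Positive", "Negative"]))  # Positive
print(normalise_for_chance(0.75, n_options=2))                       # 0.5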

The evaluation was done zero-shot with native prompts on a sample of 100-1000 instances for each dataset.

Instruction-following Capabilities

Since Gemma2 9B CPT SEA-LIONv3 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, IFEval and MT-Bench.

As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.

IFEval

IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
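
As a concrete illustration (not the official IFEval scorer; field names are hypothetical), the language normalisation amounts to counting a response as a pass only if it both satisfies the constraints and is in the expected language:

# Illustrative language-normalised IFEval scoring.
def language_normalised_accuracy(results):
    # results: list of dicts with boolean keys 'followed_instructions'
    # and 'correct_language' (key names are hypothetical).
    passed = sum(
        1 for r in results
        if r["followed_instructions"] and r["correct_language"]
    )
    return passed / len(results)

sample = [
    {"followed_instructions": True,  "correct_language": True},   # pass
    {"followed_instructions": True,  "correct_language": False},  # fail: wrong language
    {"followed_instructions": False, "correct_language": True},   # fail: constraints
]
print(language_normalised_accuracy(sample))  # 0.333...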

MT-Bench

MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use gpt-4-1106-preview as the judge model and compare against gpt-3.5-turbo-0125 as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category: Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction). A tie is given a score of 0.5.
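
A minimal sketch of this weighted win rate computation (the outcome data below is hypothetical):

from statistics import mean

# Per-category win rates are computed first (a tie scores 0.5),
# then averaged with equal weight across categories.
def weighted_win_rate(results_by_category):
    # results_by_category: dict mapping category -> list of 'win'/'tie'/'loss'.
    score = {"win": 1.0, "tie": 0.5, "loss": 0.0}
    per_category = [
        mean(score[outcome] for outcome in outcomes)
        for outcomes in results_by_category.values()
    ]
    return mean(per_category)

# Hypothetical outcomes against the gpt-3.5-turbo-0125 baseline
print(weighted_win_rate({
    "Math":      ["win", "loss", "tie"],   # 0.5
    "Reasoning": ["win", "win", "loss"],   # 0.667
}))  # ~0.583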

For more details on Gemma2 9B CPT SEA-LIONv3 Instruct benchmark performance, please refer to the SEA HELM leaderboard at https://leaderboard.sea-lion.ai/

Usage

Gemma2 9B CPT SEA-LIONv3 Instruct can be run using the 🤗 Transformers library.

# Please use transformers==4.45.2

import transformers
import torch

model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct"

# Build a chat-style text-generation pipeline in bfloat16, letting
# device_map="auto" place the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Example prompt (Indonesian): "What is the sentiment of the following
# sentence?\nSentence: This book is very boring.\nAnswer: "
messages = [
    {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the full conversation; the last message is the
# model's reply.
print(outputs[0]["generated_text"][-1])
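
Alternatively, the model can be loaded directly with AutoModelForCausalLM and the tokenizer's chat template; a minimal sketch using the same model ID (the prompt text is illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Terjemahkan ke Bahasa Inggris: Buku ini sangat membosankan."},
]
# Render the chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))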

Caveats

Users should be aware that the model exhibits certain limitations. Like many LLMs, it can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to potential inconsistencies in its reasoning.

Limitations

Safety

Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.

Technical Specifications

Fine-Tuning Details

Gemma2 9B CPT SEA-LIONv3 Instruct was built using a combination of full-parameter fine-tuning, on-policy alignment, and merges of the best-performing checkpoints. Fine-tuning took approximately 15 hours and alignment approximately 2 hours, both on 8x H100-80GB GPUs.
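
The specific merge method is not detailed here; as a purely illustrative sketch, an equal-weight linear merge of two checkpoints (with hypothetical paths) could look like this:

import torch
from transformers import AutoModelForCausalLM

# Purely illustrative: this is not necessarily the merge method used for
# SEA-LIONv3 Instruct, and the checkpoint paths are hypothetical.
model_a = AutoModelForCausalLM.from_pretrained("ckpt-a", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("ckpt-b", torch_dtype=torch.bfloat16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()
# Equal-weight average of every parameter tensor.
merged = {k: 0.5 * state_a[k] + 0.5 * state_b[k] for k in state_a}

model_a.load_state_dict(merged)       # reuse model_a as the merged container
model_a.save_pretrained("ckpt-merged")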

Data

Gemma2 9B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.

Indonesian, Javanese & Sundanese Specific SEA-LION

Our partners at GoTo have performed continued pre-training and instruction tuning on a variant of Gemma2 9B CPT SEA-LIONv3, specifically enhancing its capabilities for Indonesian, Javanese, and Sundanese. Find the continued pre-trained model at Gemma2 9B CPT SahabatAIv1 Base, and its corresponding instruction-tuned version at Gemma2 9B CPT SahabatAIv1 Instruct.

Call for Contributions

We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.

The Team

Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin

Acknowledgements

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

Contact

For more information, please contact us using the SEA-LION Inquiry Form.

Link to SEA-LION's GitHub repository

Disclaimer

This is the repository for the commercial instruction-tuned model. The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
