Gemma 2 Baku 2B (rinna/gemma-2-baku-2b)


Overview

We conduct continual pre-training of google/gemma-2-2b on 80B tokens from a mixture of Japanese and English datasets. The continual pre-training improves the model's performance on Japanese tasks.

The name baku comes from 獏 (ばく, baku), a mythical creature (妖怪, yōkai) in Japanese folklore.

| Size | Continual Pre-Training | Instruction-Tuning |
| :--- | :--- | :--- |
| 2B | Gemma 2 Baku 2B [HF] | Gemma 2 Baku 2B Instruct [HF] |

Benchmarking

Please refer to rinna's LM benchmark page.


How to use the model

import transformers
import torch

model_id = "rinna/gemma-2-baku-2b"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "attn_implementation": "eager"},
    device_map="auto"
)
output = pipeline(
    "西田幾多郎は、",
    max_new_tokens=256,
    do_sample=True
)
print(output[0]["generated_text"])

Eager attention is recommended for batch inference under bfloat16 precision. Gemma 2 currently yields NaN values for input sequences containing padding when the default attention implementation (torch.nn.functional.scaled_dot_product_attention) is used together with bfloat16.
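As a concrete illustration of the note above, the sketch below runs batch inference with padded inputs while forcing eager attention. The prompts and generation parameters are illustrative assumptions, not part of the model card; it assumes you have access to the model weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rinna/gemma-2-baku-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # avoids NaNs with padded batches under bfloat16
    device_map="auto",
)

# Two prompts of different lengths, so the shorter one is padded.
prompts = ["西田幾多郎は、", "夏目漱石の代表作は、"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)

for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)
```

Passing the attention mask produced by the tokenizer (included in `inputs`) is what lets `generate` ignore the padding tokens.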


Tokenization

The model uses the original google/gemma-2-2b tokenizer.
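Since the tokenizer is unchanged from google/gemma-2-2b, it can be loaded directly from this repository; a minimal round-trip check (the sample sentence is an illustrative assumption):

```python
from transformers import AutoTokenizer

# Identical vocabulary to google/gemma-2-2b's tokenizer.
tokenizer = AutoTokenizer.from_pretrained("rinna/gemma-2-baku-2b")

ids = tokenizer.encode("西田幾多郎は、")
decoded = tokenizer.decode(ids, skip_special_tokens=True)
print(decoded)
```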


How to cite

@misc{rinna-gemma-2-baku-2b,
    title = {rinna/gemma-2-baku-2b},
    author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co/rinna/gemma-2-baku-2b}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}

References

@article{gemma-2-2024,
    title = {Gemma 2},
    url = {https://www.kaggle.com/models/google/gemma-2},
    publisher = {Kaggle},
    author = {Gemma Team},
    year = {2024}
}

@misc{litgpt-2023,
    author = {Lightning AI},
    title = {LitGPT},
    howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
    year = {2023}
}

License

Gemma Terms of Use
