
Preface

Small-parameter LLMs are well suited to the Japanese language, which combines three writing systems (kanji, hiragana, and katakana) with subtle social registers. Despite their size, these models can deliver accurate, context-aware results, making them a good fit for resource-constrained environments: on mobile devices with limited processing power, or in edge computing scenarios that demand fast, real-time responses, they strike a practical balance between performance and efficiency.

Llama 3.2 Chibi 3B

This experimental model is the result of continuous pre-training of Meta's Llama 3.2 3B on a small mixture of Japanese datasets. It is not fine-tuned for chat or dialogue-based tasks. The model has been pre-trained for general language modeling purposes and may require additional fine-tuning for specific applications, such as conversational agents or other downstream tasks. Users interested in deploying this model for interactive environments should consider further fine-tuning with appropriate datasets.
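As a rough illustration of the additional fine-tuning step mentioned above, the sketch below uses the Hugging Face `Trainer` for causal-LM fine-tuning. The output directory, hyperparameters, and the expectation of an already-tokenized dataset are placeholders for illustration, not recommendations from this model card:

```python
# Placeholder hyperparameters for illustration only.
train_config = {
    "output_dir": "chibi-3b-sft",
    "num_train_epochs": 1,
    "per_device_train_batch_size": 1,
    "learning_rate": 2e-5,
}

def finetune(tokenized_dataset):
    """Sketch: continue training the checkpoint on a tokenized text dataset."""
    # Heavy imports are kept inside the function so the sketch stays light.
    import torch
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    model_id = "AELLM/Llama-3.2-Chibi-3B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(**train_config),
        train_dataset=tokenized_dataset,
        # mlm=False gives standard next-token (causal) language modeling labels.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
```

For conversational use, the dataset passed to `finetune` would need to be formatted with an appropriate chat template before tokenization.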

Architecture

Llama 3.2 3B

Training

The model has been trained with the following mixture of datasets:

Contributors

How to use

Starting with transformers >= 4.43.0, you can run inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function. Make sure your installation is up to date via pip install --upgrade transformers.

import torch
from transformers import pipeline

model_id = "AELLM/Llama-3.2-Chibi-3B"

# Build a text-generation pipeline in bfloat16, placing weights automatically.
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Complete the Japanese prompt "人生の鍵は" ("The key to life is").
print(pipe("人生の鍵は"))
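The Auto-classes route mentioned above can be sketched as follows; the sampling parameters here are illustrative defaults, not values recommended by the model authors:

```python
# Illustrative sampling settings; tune these for your use case.
gen_kwargs = {
    "max_new_tokens": 64,
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.9,
}

def generate_text(prompt: str) -> str:
    """Complete a Japanese prompt with AutoModelForCausalLM.generate()."""
    # Heavy imports are deferred so the sketch stays importable without a GPU setup.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "AELLM/Llama-3.2-Chibi-3B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, **gen_kwargs)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Calling `print(generate_text("人生の鍵は"))` then prints a completion of the prompt.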

License

Refer to the Llama 3.2 Community License.

References

@inproceedings{zheng2024llamafactory,
  title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
  author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
  booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
  address={Bangkok, Thailand},
  publisher={Association for Computational Linguistics},
  year={2024},
  url={http://arxiv.org/abs/2403.13372}
}