🌟 Check out the Taiwan-LLM Demo Chat-UI 🌟
Model Card for Taiwan LLM 7B v2.0 base
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. For detailed insights into Taiwan LLM's development and features, refer to our technical report.
Model description
- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily Traditional Chinese (zh-tw)
- Finetuned from model: meta-llama/Llama-2-7b-hf
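As a rough illustration, the snippet below loads the model for plain text continuation with 🤗 Transformers. This is a minimal sketch: the repository ID is assumed from this card's title, and since this is a base (not chat-tuned) model, expect raw completions rather than dialogue.

```python
# Minimal loading/generation sketch. The repo ID below is assumed from the
# card title and may differ; verify against the actual Hub repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-7B-v2.0-base"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision; adjust to your hardware
    device_map="auto",           # requires `accelerate`
)

# "The tallest mountain in Taiwan is" — the base model continues raw text.
inputs = tokenizer("台灣最高的山是", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```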
Model Sources
- Repository: https://github.com/MiuLab/Taiwan-LLaMa
- Demo: https://twllm.com/
Performance
Taiwan LLM demonstrates improved results on Traditional Chinese benchmarks such as TC-Eval; see the technical report (arXiv:2311.17487) for detailed numbers and comparisons.
Intended uses
You should fine-tune this model for instruction-following / chat applications; a hypothetical configuration sketch follows the training hyperparameters list below.
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
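For reference, here is a minimal, hypothetical sketch of how these hyperparameters map onto a Hugging Face TrainingArguments configuration. The output path, batch size, and precision settings are not stated in this card and are placeholders only.

```python
# Hypothetical TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./taiwan-llm-7b-sft",  # placeholder output path
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    per_device_train_batch_size=4,     # assumed; not stated in the card
    bf16=True,                         # assumed; typical for multi-GPU LLaMA fine-tuning
)
```

Multi-GPU training (the `distributed_type` above) is handled by the launcher (e.g., `accelerate launch` or `torchrun`) rather than by these arguments.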
Citation
If you find Taiwan LLM useful in your work, please cite it with:
@misc{lin2023taiwan,
      title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
      author={Yen-Ting Lin and Yun-Nung Chen},
      year={2023},
      eprint={2311.17487},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Acknowledgement
Taiwan LLM v2 was developed in collaboration with Ubitus K.K., which provided valuable compute resources for the project.