LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages
Abstract
Large Language Models~(LLMs) demonstrate remarkable translation capabilities in high-resource language tasks, yet their performance in low-resource languages is hindered by insufficient multilingual data during pre-training. To address this, we dedicate 35,000 A100-SXM4-80GB GPU hours to extensive multilingual continual pre-training of the LLaMA series models, enabling translation support across more than 100 languages. Through a comprehensive analysis of training strategies, such as vocabulary expansion and data augmentation, we develop LLaMAX. Remarkably, without sacrificing its generalization ability, LLaMAX achieves significantly higher translation performance than existing open-source LLMs~(by more than 10 spBLEU points) and performs on par with the specialized translation model M2M-100-12B on the Flores-101 benchmark. Extensive experiments indicate that LLaMAX can serve as a robust multilingual foundation model. The code~\url{https://github.com/CONE-MT/LLaMAX/} and models~\url{https://huggingface.co/LLaMAX/} are publicly available.
Community
LLaMAX is a powerful language model created specifically for multilingual scenarios. Built upon Meta's LLaMA series models, LLaMAX undergoes extensive continual pre-training across more than 100 languages. Remarkably, it enhances its multilingual capabilities without compromising its generalization ability, surpassing existing open-source LLMs in translation performance.
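Since the checkpoints are released on the Hugging Face Hub, a minimal sketch of how one might try the model with the transformers library is shown below. The repository name LLaMAX/LLaMAX2-7B-Alpaca and the prompt format are assumptions for illustration only; browse https://huggingface.co/LLaMAX/ for the actual model IDs and any recommended prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name; check https://huggingface.co/LLaMAX/ for released models.
model_id = "LLaMAX/LLaMAX2-7B-Alpaca"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative instruction-style translation prompt; the exact template the
# authors use is not specified on this page.
prompt = (
    "Translate the following sentence from English to Swahili.\n"
    "English: The weather is nice today.\n"
    "Swahili:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```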
Highlights:
LLaMAX delivers enhanced translation performance across all 101 languages covered by Flores-101 (see the evaluation sketch after this list).
LLaMAX also benefits unseen, long-tail low-resource languages, as shown by its performance on Flores-200.
LLaMAX provides a better starting point for multilingual tasks, delivering accuracy improvements of more than 5% after fine-tuning on task-specific data.
The paper also provides extensive analysis of multilingual continual pre-training.
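The Flores-101 results above are reported in spBLEU, i.e. BLEU computed on SentencePiece-tokenized text. A minimal sketch of how such scores can be computed with sacrebleu is given here, assuming a sacrebleu release (>= 2.0) that ships the "flores101" SPM tokenizer; the toy sentences are placeholders, not data from the paper.

```python
import sacrebleu

# Placeholder hypothesis/reference pairs standing in for model outputs and
# Flores-101 references; in practice these come from the benchmark's devtest split.
hypotheses = ["Hali ya hewa ni nzuri leo."]
references = [["Hali ya hewa ni nzuri leo."]]

# spBLEU: BLEU over SentencePiece tokens, selected via the "flores101" tokenizer.
score = sacrebleu.corpus_bleu(hypotheses, references, tokenize="flores101")
print(f"spBLEU: {score.score:.2f}")
```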