
PhoGPT: Generative Pre-training for Vietnamese

We open-source a state-of-the-art 4B-parameter generative model series for Vietnamese, which includes the base pre-trained monolingual model PhoGPT-4B and its chat variant, PhoGPT-4B-Chat. The base model, PhoGPT-4B, with exactly 3.7B parameters, is pre-trained from scratch on a Vietnamese corpus of 102B tokens, with an 8192-token context length and a vocabulary of 20480 token types. The chat variant, PhoGPT-4B-Chat, is obtained by fine-tuning PhoGPT-4B on a dataset of 70K instructional prompts and their responses, along with an additional 290K conversations. We demonstrate its superior performance over previous open-source models. More details about the general architecture and experimental results of PhoGPT can be found in our technical report:

@article{PhoGPT,
  title   = {{PhoGPT: Generative Pre-training for Vietnamese}},
  author  = {Dat Quoc Nguyen and Linh The Nguyen and Chi Tran and Dung Ngoc Nguyen and Dinh Phung and Hung Bui},
  journal = {arXiv preprint},
  volume  = {arXiv:2311.02945},
  year    = {2023}
}

Please CITE our technical report when PhoGPT is used to help produce published results or is incorporated into other software.

For further information or requests, please go to PhoGPT's homepage!
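
A minimal usage sketch for the chat variant with Hugging Face transformers is given below. The Hub repo id `vinai/PhoGPT-4B-Chat` and the `### Câu hỏi: ... ### Trả lời:` instruction format are assumptions not stated in this card; please check PhoGPT's homepage for the exact prompt template.

```python
# Minimal sketch: generating a response with PhoGPT-4B-Chat via transformers.
# Assumptions (not confirmed by this card): the repo id "vinai/PhoGPT-4B-Chat"
# and the "### Câu hỏi: ... ### Trả lời:" instruction format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vinai/PhoGPT-4B-Chat"  # assumed Hub repo id

# MPT-based checkpoints ship custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.eval()

# Assumed instruction format for the chat variant.
prompt = "### Câu hỏi: Viết bài văn nghị luận xã hội về an toàn giao thông\n### Trả lời:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```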

Model details:
- Format: GGUF (4-bit and 8-bit quantizations available)
- Model size: 3.69B params
- Architecture: mpt
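
For the GGUF quantizations listed above, a local run with llama-cpp-python might look like the sketch below. The file name is a hypothetical placeholder, and llama.cpp support for the MPT architecture as well as the prompt format are assumptions on my part.

```python
# Minimal sketch: running a 4-bit GGUF quantization locally with llama-cpp-python.
# The file name "PhoGPT-4B-Chat.Q4_0.gguf" is a hypothetical placeholder; the
# prompt format is the same assumed template as in the transformers sketch.
from llama_cpp import Llama

llm = Llama(
    model_path="PhoGPT-4B-Chat.Q4_0.gguf",  # hypothetical local file name
    n_ctx=8192,  # matches the model's pre-training context length
)

prompt = "### Câu hỏi: Tóm tắt lịch sử Hà Nội trong ba câu.\n### Trả lời:"
out = llm(prompt, max_tokens=200, temperature=0.7)
print(out["choices"][0]["text"])
```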
