---
license: apache-2.0
language:
- en
- th
pipeline_tag: text-generation
datasets:
- wannaphong/mark13
---
## NumFaLM 3B
NumFaLM 3B is a bilingual language model trained on Thai and English. It uses the Llama architecture and was pretrained from scratch. It was built to open-source AI and research for bilingual language models and to improve small language models. We release the training script and the training dataset so that others can study and reproduce both.
- GitHub: [https://github.com/wannaphong/NumFaLM](https://github.com/wannaphong/NumFaLM)
- Training script: [https://github.com/wannaphong/EasyLM/tree/numfa_pretraining](https://github.com/wannaphong/EasyLM/tree/numfa_pretraining)
- Training dataset: [wannaphong/mark13](https://huggingface.co/datasets/wannaphong/mark13)
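
A minimal text-generation sketch with the `transformers` library. The repository id used below is an assumption; replace it with the actual Hugging Face model id of this card.

```python
# Minimal inference sketch (assumed model id; adjust to this card's repository).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wannaphong/numfa-3b"  # assumption: replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Thai prompt; the model is bilingual (Thai / English).
prompt = "ประเทศไทยมี"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```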
We forked EasyLM and added support for training on Hugging Face datasets. The Hugging Face Hub was down several times while we were training, so we were only able to train the model for one epoch.
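
The released training data can be inspected directly with the `datasets` library. This is a sketch assuming the default configuration; streaming just avoids downloading the full corpus, and the record fields depend on the dataset itself.

```python
# Inspect a few records of the released pretraining dataset.
from datasets import load_dataset

# Streaming avoids downloading the whole corpus at once.
ds = load_dataset("wannaphong/mark13", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example)
    if i >= 2:  # show only the first few records
        break
```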
## Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). We used a TPU v4-64 for about 4 days to train one epoch.
Thank you to the [TPU Research Cloud](https://sites.research.google/trc/about/) and the [EasyLM project](https://github.com/young-geng/EasyLM)! We used EasyLM to pretrain the model.