---
license: apache-2.0
language:
- en
- th
library_name: transformers
datasets:
- wannaphong/KhanomTanLLM-pretrained-dataset
---
# KhanomTan LLM (3B)
KhanomTan LLM is a bilingual Thai and English language model from PyThaiNLP, pretrained exclusively on open, publicly available datasets. We release the dataset, the source code, and the model.

Repository: https://github.com/pythainlp/KhanomTanLLM

Codename: numfa-v2
## Model Details
### Model Description
The model was trained with [EasyLM](https://github.com/young-geng/EasyLM).
## Acknowledgements
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). We trained the model on a TPU v4-64 for about 8 days.

Thank you to the [TPU Research Cloud](https://sites.research.google/trc/about/) and the [EasyLM project](https://github.com/young-geng/EasyLM)! We used EasyLM to pretrain the model.
## How to Get Started with the Model
Use the code below to get started with the model.
**Example**
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

# Load the model in bfloat16 and spread it automatically across available devices.
pipe = pipeline("text-generation", model="numfa/numfa_v2-3b", torch_dtype=torch.bfloat16, device_map="auto")

outputs = pipe(
    "test is",
    max_new_tokens=300,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=2,
    typical_p=1.0,
)
print(outputs[0]["generated_text"])
```
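The sampling arguments above (`top_k`, `top_p`) restrict the candidate token set before each sampling step: `top_k` keeps only the k most probable tokens, and `top_p` keeps the smallest set whose cumulative probability reaches p. A minimal illustrative sketch on a toy distribution (a simplification for intuition, not the actual `transformers` implementation):

```python
def filter_top_k_top_p(probs, top_k=50, top_p=0.95):
    """Toy nucleus (top-p) + top-k filtering over a token->probability dict."""
    # Keep the top_k most probable tokens.
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Then keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for tok, p in items:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize the surviving probabilities so they sum to 1.
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
# "d" is cut once the cumulative mass of a, b, c reaches top_p.
print(filter_top_k_top_p(probs, top_k=3, top_p=0.9))
```

The model then samples from this reduced, renormalized distribution, which is why higher `top_p` and `top_k` values produce more varied text.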