wannaphong committed on
Commit 3172b63 • 1 Parent(s): 7676b30
Update README.md
README.md CHANGED
@@ -2,57 +2,25 @@
Before:

license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
#

Base model: openllama3b

## Model Details

- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Out-of-Scope Use

Math, coding, and languages other than English.
## Bias, Risks, and Limitations

The model may carry biases from its training dataset. Use at your own risk!
## How to Get Started with the Model

Use the code below to get started with the model.

**Example**
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

# Load the checkpoint as a text-generation pipeline in bfloat16, placing it on available devices.
pipe = pipeline("text-generation", model="numfa_3b_1epoch", torch_dtype=torch.bfloat16, device_map="auto")

# Sample a continuation of the prompt; the decoding arguments control diversity and repetition.
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.0)
print(outputs[0]["generated_text"])
```
After:

license: apache-2.0
language:
- en
- th
pipeline_tag: text-generation
datasets:
- wannaphong/mark13
---
# NumFaLM 3B

NumFaLM 3B is a bilingual language model trained on Thai and English. It uses the Llama architecture and was pretrained from scratch. It was built as an open-source effort to support research on bilingual language models and to improve small language models. We release the training script and the training datasets so that you can study both the training process and the data.
- Training script: [https://github.com/wannaphong/EasyLM/tree/numfa_pretraining](https://github.com/wannaphong/EasyLM/tree/numfa_pretraining)
- Train Datasets: [wannaphong/mark13](https://huggingface.co/datasets/wannaphong/mark13) (see the loading sketch below)
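As a rough orientation, the snippet below loads the training corpus with the Hugging Face `datasets` library. This is only a sketch: the `train` split name and the `text` column are assumptions about the dataset layout, not something stated in this card.

```python
# Sketch: inspect the NumFaLM training corpus from the Hub.
# Assumed: the dataset exposes a "train" split and a "text" column; adjust to the real schema.
from datasets import load_dataset

ds = load_dataset("wannaphong/mark13", split="train")
print(ds)                   # number of rows and column names
print(ds[0]["text"][:200])  # peek at the first record, assuming a "text" field
```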
We forked EasyLM and added support for training from Hugging Face datasets, but Hugging Face was down many times while we were training, so we were only able to train the model for one epoch.
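Because the checkpoint uses the standard Llama architecture, it should load with the usual `transformers` causal-LM API. The sketch below reuses the `numfa_3b_1epoch` id from the earlier example; point it at the actual local path or Hub repository of the released checkpoint.

```python
# Sketch: generate text from the pretrained checkpoint with transformers.
# "numfa_3b_1epoch" is the id used in the earlier pipeline example and may not be the final repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "numfa_3b_1epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# A Thai prompt, since the model is trained on Thai and English.
inputs = tokenizer("ประเทศไทยมี", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.9, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```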
# Acknowledgements

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). We used a TPU v4-64, and training took about 4 days for the single epoch.

Thank you, [TPU Research Cloud](https://sites.research.google/trc/about/) and the [EasyLM project](https://github.com/young-geng/EasyLM)! We used EasyLM to pretrain the model.