wannaphong committed on
Commit ce367df
1 Parent(s): 93861f8

Update README.md

Files changed (1)
  1. README.md +10 -18
README.md CHANGED
@@ -4,36 +4,28 @@ language:
  - en
  library_name: transformers
  ---
- # NumFa v2 (3B)

- NumFa v2 3B is a pretrained LLM with 1B parameters.
 
- Base model: TinyLlama

- **For testing only**

- ## Model Details
-
- ### Model Description

- The model was trained on TPUs.
 
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** NumFa
- - **Model type:** text-generation
- - **Language(s) (NLP):** English
- - **License:** apache-2.0

- ### Out-of-Scope Use

- Math, coding, and languages other than English

- ## Bias, Risks, and Limitations

- The model can have biases from its dataset. Use at your own risk!
 
+ # KhanomTan LLM (3B)

+ KhanomTan LLM is a Thai LLM pretrained from scratch on open-source datasets by PyThaiNLP. We trained the model on public datasets only, and we release the dataset, the source code, and the model.

+ Repository: https://github.com/wannaphong/KhanomTanLLM

+ Codename: numfa-v2
 
+ ## Model Details

+ ### Model Description

+ The model was trained with [EasyLM](https://github.com/young-geng/EasyLM).
 
+ ## Acknowledgements
 
+ Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC). We used a TPU v4-64 to train the model for about 8 days.
 
+ Thank you, [TPU Research Cloud](https://sites.research.google/trc/about/) and [EasyLM project](https://github.com/young-geng/EasyLM)! We used EasyLM to pretrain the model.

  ## How to Get Started with the Model
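
As a minimal sketch of loading this checkpoint with 🤗 transformers (the card declares `library_name: transformers`): the repo id `wannaphong/KhanomTanLLM-3B` below is an assumption for illustration, not confirmed by this commit; substitute the id actually published on the Hub.

```python
# Minimal usage sketch with Hugging Face transformers.
# NOTE: the repo id is a hypothetical placeholder, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wannaphong/KhanomTanLLM-3B"  # assumed repo id; check the Hub

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short Thai continuation from a prompt.
inputs = tokenizer("ประเทศไทยเป็น", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```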