wannaphong committed
Commit 93861f8
1 Parent(s): 03434af

Create README.md

Files changed (1): README.md (+55, -0)
README.md ADDED
---
license: apache-2.0
language:
- en
library_name: transformers
---
# NumFa v2 (3B)

NumFa v2 3B is a pretrained large language model.

Base model: TinyLLama

**For testing only**

## Model Details

### Model Description

The model was trained on TPUs.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0

### Out-of-Scope Use

Math, coding, and languages other than English.

## Bias, Risks, and Limitations

The model may reflect biases present in its training dataset. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

# Load the model in bfloat16 and place it on the available device(s) automatically.
pipe = pipeline("text-generation", model="numfa/numfa_v2-3b", torch_dtype=torch.bfloat16, device_map="auto")

# This is a base (pretrained) model, so we pass a plain text prompt rather than a chat template.
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.0)
print(outputs[0]["generated_text"])
```
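
The install line above already pulls in bitsandbytes, so a lower-memory alternative is to load the weights 4-bit quantized with `AutoModelForCausalLM`. The sketch below is only illustrative, not an official recipe: it assumes a CUDA GPU is available and reuses the same `numfa/numfa_v2-3b` checkpoint and sampling settings as the pipeline example.

```python
# Illustrative sketch (assumes a CUDA GPU): load the checkpoint 4-bit quantized via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "numfa/numfa_v2-3b"

# 4-bit NF4 quantization with bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Plain text completion, mirroring the sampling settings of the pipeline example above.
inputs = tokenizer("test is", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Quantizing to 4-bit roughly quarters the weight memory compared with bfloat16, at some cost in output quality; if you have enough VRAM, the bfloat16 pipeline above is the simpler choice.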