Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Faro-Qwen-1.8B - AWQ
- Model creator: https://huggingface.co/wenbopan/
- Original model: https://huggingface.co/wenbopan/Faro-Qwen-1.8B/
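
As a rough intuition for what an AWQ-style low-bit checkpoint stores, here is a pure-Python sketch of group-wise 4-bit weight quantization. The values and group contents are illustrative assumptions, not taken from this checkpoint, and real AWQ additionally rescales salient weight channels using activation statistics before quantizing:

```python
# Sketch: quantize one group of weights to 4-bit integers plus a
# per-group scale and zero-point, then reconstruct them.
def quantize_group(weights, bits=4):
    qmax = 2 ** bits - 1                      # 15 levels for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0           # avoid divide-by-zero for flat groups
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize_group(q, scale, lo):
    return [v * scale + lo for v in q]

weights = [0.12, -0.45, 0.03, 0.88, -0.20, 0.51]   # illustrative group
q, scale, lo = quantize_group(weights)
restored = dequantize_group(q, scale, lo)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The reconstruction error per weight is bounded by half the group scale, which is why narrow groups (AWQ commonly uses group size 128) keep quality high at 4 bits.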

Original model description:
---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
pipeline_tag: text-generation
---
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**

# Faro-Qwen-1.8B
Faro-Qwen-1.8B is an improved [Qwen/Qwen1.5-4B-Chat](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Qwen1.5-4B-Chat, Faro-Qwen-1.8B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

Faro-Qwen-1.8B uses dynamic NTK and continual training to extend its max context length to 100K. However, because vLLM lacks dynamic NTK support for `Qwen2ForCausalLM`, inference on text longer than 32K requires the native `transformers` implementation.
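
The core of dynamic NTK scaling is rescaling the RoPE frequency base once the input grows past the trained window, so position embeddings interpolate smoothly instead of going out of distribution. A minimal sketch of the rescaling rule (as used by Hugging Face transformers' "dynamic" `rope_scaling`); the concrete numbers below are illustrative assumptions, not Faro's actual config:

```python
# Sketch: grow the RoPE base when seq_len exceeds the trained context window.
def dynamic_ntk_base(base, seq_len, max_trained, factor, head_dim):
    if seq_len <= max_trained:
        return base                       # within trained window: no rescaling
    scale = (factor * seq_len / max_trained) - (factor - 1)
    return base * scale ** (head_dim / (head_dim - 2))

# Illustrative values: base 10000, 32K trained window, 100K input.
base = dynamic_ntk_base(10000.0, seq_len=100_000, max_trained=32_768,
                        factor=2.0, head_dim=128)
```

A larger base stretches the rotary frequencies, which is what lets the model attend over 100K tokens without retraining from scratch.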

## How to Use

Faro-Qwen-1.8B uses the ChatML template. I recommend using vLLM for long inputs.

```python
import io
import requests
from PyPDF2 import PdfReader
from vllm import LLM, SamplingParams

llm = LLM(model="wenbopan/Faro-Qwen-1.8B")

# Fetch a long PDF and flatten it into plain text (about 100 pages).
pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content)
document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages)

question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?"
messages = [{"role": "user", "content": question}]  # about 83K tokens

prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500))
print(output[0].outputs[0].text)
```
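
For reference, the `apply_chat_template` call above serializes messages into ChatML markers. A minimal sketch of that format, assuming the standard `<|im_start|>`/`<|im_end|>` markers Qwen models use:

```python
# Sketch: build a ChatML prompt string by hand.
def to_chatml(messages, add_generation_prompt=True):
    out = "".join(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
                  for m in messages)
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"   # cue the model to answer
    return out

prompt = to_chatml([{"role": "user", "content": "Hi"}])
# prompt == "<|im_start|>user\nHi<|im_end|>\n<|im_start|>assistant\n"
```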

<details> <summary>Or With Transformers</summary>

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Qwen-1.8B', device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Qwen-1.8B')
messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."}
]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect in generate().
generated_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5)
response = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(response)
```

</details>

For more info please refer to [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B).