---
license: mit
datasets:
- wenbopan/Fusang-v1
- wenbopan/OpenOrca-zh-20k
language:
- zh
- en
---

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/s21sMRxRT56c5t4M15GBP.webp)

**The Faro chat model focuses on practicality and long-context modeling. It handles various downstream tasks with higher quality, delivering stable and reliable results even when inputs contain lengthy documents or complex instructions. Faro works seamlessly in both English and Chinese.**

# Faro-Yi-34B

Faro-Yi-34B is an improved [Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-34B-200K, Faro-Yi-34B gains greater capability in various downstream tasks and in long-context modeling, thanks to the large-scale synthetic data in Fusang-V1.

## How to Use

Faro-Yi-34B uses the ChatML template, which makes it easy to set up a system prompt and multi-turn conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "wenbopan/Faro-Yi-34B"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype="auto",  # load in the checkpoint's native precision
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Always answer with a short response."},
    {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# do_sample=True is required for temperature to take effect
generated_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.5)
# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. It's like this: if you have a triangle with two sides, you can find the length of the third side by squaring the two sides and then adding them together. The square root of that sum will give you the length of the third side! It's useful for sailing and navigating, so you always know how far you've traveled. Remember, it's all about the sum of squares, me hearties!
```
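
For reference, the string that `apply_chat_template` produces follows the standard ChatML layout: each turn is wrapped in `<|im_start|>{role}` and `<|im_end|>` markers. The snippet below is a minimal illustration of that format, not code read from this model's tokenizer config:

```python
# Minimal sketch of the ChatML prompt format (illustrative, not the
# model's actual tokenizer code).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]

prompt = ""
for m in messages:
    # Each turn: <|im_start|>role\ncontent<|im_end|>\n
    prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
# add_generation_prompt=True appends an open assistant turn for the
# model to complete.
prompt += "<|im_start|>assistant\n"

print(prompt)
```

Understanding this layout is mainly useful when serving the model through backends that take raw prompt strings rather than message lists.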