MaziyarPanahi committed
Commit ce988b7
1 Parent(s): 41e8485

Create README.md (#3)


- Create README.md (b4f0fad74828201036df14faff0453928217c493)

Files changed (1)
  1. README.md +83 -0
README.md ADDED

---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-7B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- qwen
- qwen2
- finetune
- chatml
- OpenHermes-2.5
- HelpSteer2
- Orca
- SlimOrca
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: Qwen/Qwen2-7B
model_name: Qwen2-7B-Instruct-v0.8
datasets:
- nvidia/HelpSteer2
- teknium/OpenHermes-2.5
- microsoft/orca-math-word-problems-200k
- Open-Orca/SlimOrca
---

<img src="./qwen2-fine-tunes-maziyar-panahi.webp" alt="Qwen2 fine-tune" width="500" style="margin-left: auto; margin-right: auto; display: block;"/>

# MaziyarPanahi/Qwen2-7B-Instruct-v0.8

This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve on the base model across all benchmarks.

# ⚡ Quantized GGUF

All GGUF models are available here: [MaziyarPanahi/Qwen2-7B-Instruct-v0.8](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.8)
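
For local inference, a minimal sketch using `llama-cpp-python` (`pip install llama-cpp-python`); the repo id is taken from the link above, and the quant filename pattern is an assumption, so substitute whichever GGUF file the repo actually lists:

```python
# Sketch only: the GGUF filename pattern below is assumed, not verified.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Qwen2-7B-Instruct-v0.8",  # repo linked above
    filename="*Q4_K_M.gguf",  # assumed quant level; glob matches any Q4_K_M file
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```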

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Coming soon!

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
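
In practice this string rarely needs to be assembled by hand; a minimal sketch, assuming a recent `transformers`, of rendering it from the tokenizer's bundled chat template:

```python
# Minimal sketch: apply_chat_template renders the ChatML layout shown above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.8")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# add_generation_prompt appends the opening <|im_start|>assistant tag
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```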

# How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/Qwen2-7B-Instruct-v0.8")
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.8")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.8")
```
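
From there, a minimal generation sketch with the `model` and `tokenizer` loaded above; the sampling settings are illustrative assumptions rather than recommended values:

```python
import torch

# Build the ChatML prompt and generate a reply (settings are assumptions)
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Who are you?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```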