Create README.md #1
by MaziyarPanahi - opened

README.md ADDED
@@ -0,0 +1,117 @@
---
pipeline_tag: text-generation
tags:
- qwen
- qwen-2
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
- GGUF
inference: false
model_creator: MaziyarPanahi
model_name: calme-2.2-qwen2-72b-GGUF
quantized_by: MaziyarPanahi
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
---

# MaziyarPanahi/calme-2.2-qwen2-72b-GGUF

The GGUF and quantized models in this repository are based on the [MaziyarPanahi/calme-2.2-qwen2-72b](https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b) model.

## How to download

You can download only the quants you need instead of cloning the entire repository, as follows:

```
huggingface-cli download MaziyarPanahi/calme-2.2-qwen2-72b-GGUF --local-dir . --include '*Q2_K*gguf'
```
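
The same selective download can be scripted from Python with `huggingface_hub`; a minimal sketch, assuming only that the package is installed (the `allow_patterns` glob mirrors the CLI's `--include`):

```python
# Minimal sketch: mirror `huggingface-cli download --include '*Q2_K*gguf'`
# using the huggingface_hub Python client (pip install huggingface_hub).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MaziyarPanahi/calme-2.2-qwen2-72b-GGUF",
    local_dir=".",
    allow_patterns=["*Q2_K*gguf"],  # fetch only the Q2_K quant files
)
```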

## Load GGUF models

```sh
./llama.cpp/main -m model_name.Q2_K.gguf -p "<|im_start|>user\nJust say 1, 2, 3 hi and NOTHING else\n<|im_end|>\n<|im_start|>assistant\n" -n 1024
```
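
The same files can also be loaded from Python with `llama-cpp-python`; a minimal sketch, assuming the package is installed and using a hypothetical local file name (recent versions pick up the ChatML template from the GGUF metadata):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# model_path is a placeholder; point it at the .gguf quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./calme-2.2-qwen2-72b.Q2_K.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Just say 1, 2, 3 hi and NOTHING else"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```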
## Original README

---

# MaziyarPanahi/calme-2.2-qwen2-72b

This is a fine-tuned version of the `Qwen/Qwen2-72B-Instruct` model. It aims to improve the base model across all benchmarks.

# ⚡ Quantized GGUF

All GGUF models are available here: [MaziyarPanahi/calme-2.2-qwen2-72b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-72b-GGUF)

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|     Tasks    |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|------:|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|      2|none  |     0|acc   |0.6761|± |0.0148|

|   Tasks  |Version|Filter|n-shot|Metric|Value | |Stderr|
|----------|------:|------|-----:|------|-----:|---|-----:|
|winogrande|      1|none  |     5|acc   |0.8248|± |0.0107|

|    Tasks    |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_challenge|      1|none  |    25|acc     |0.6852|± |0.0136|
|             |       |none  |    25|acc_norm|0.7184|± |0.0131|

|Tasks|Version|     Filter     |n-shot|   Metric  |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|-----:|---|-----:|
|gsm8k|      3|strict-match    |     5|exact_match|0.8582|± |0.0096|
|     |       |flexible-extract|     5|exact_match|0.8893|± |0.0086|

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
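
Rather than assembling this string by hand, you can let the tokenizer render it; a short sketch using the standard `transformers` chat-template API (the template ships with the model's tokenizer config):

```python
# Minimal sketch: render the ChatML prompt via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-72b")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# add_generation_prompt=True appends the final `<|im_start|>assistant` header
# so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```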
# How to use

```python

# Use a pipeline as a high-level helper

from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-qwen2-72b")
pipe(messages)


# Load model directly

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-72b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-72b")
```
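
Note that loading the full-precision 72B checkpoint needs on the order of 150 GB of memory at 16-bit precision; a hedged sketch of the usual loading options (assumes `accelerate` is installed, and the right dtype/device settings depend on your hardware):

```python
# Sketch: common options for fitting a 72B model across available devices.
# These settings are illustrative assumptions, not the author's recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/calme-2.2-qwen2-72b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # shard layers across all visible GPUs (needs accelerate)
)
```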