---
base_model: cognitivecomputations/dolphin-2.7-mixtral-8x7b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
inference: false
language:
- en
license: apache-2.0
model_creator: Cognitive Computations
model_name: Dolphin 2.7 Mixtral 8X7B
model_type: mixtral
quantized_by: Second State Inc.
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/second-state/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Dolphin-2.7-mixtral-8x7b-GGUF

## Original Model

[cognitivecomputations/dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)

## Run with LlamaEdge

- LlamaEdge version: [v0.2.4](https://github.com/second-state/LlamaEdge/releases/tag/0.2.4)
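
  If you don't have LlamaEdge set up yet, the following sketch shows a typical installation. The WasmEdge installer command is the standard one; the exact `.wasm` download URLs are assumptions based on the release's asset naming, so check the release page linked above if they differ.

  ```bash
  # Install WasmEdge with the GGML (llama.cpp) plugin required for GGUF inference
  curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml

  # Download the LlamaEdge apps from the matching release (assumed asset names)
  curl -LO https://github.com/second-state/LlamaEdge/releases/download/0.2.4/llama-api-server.wasm
  curl -LO https://github.com/second-state/LlamaEdge/releases/download/0.2.4/llama-chat.wasm
  ```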

- Prompt template

  - Prompt type: `chatml`

  - Prompt string

    ```text
    <|im_start|>system
    {system_message}<|im_end|>
    <|im_start|>user
    {prompt}<|im_end|>
    <|im_start|>assistant
    ```
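
    For instance, with the placeholders filled in (illustrative values only), the string passed to the model looks like:

    ```text
    <|im_start|>system
    You are a helpful assistant.<|im_end|>
    <|im_start|>user
    Write a haiku about the sea.<|im_end|>
    <|im_start|>assistant
    ```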

- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:dolphin-2.7-mixtral-8x7b-Q5_K_M.gguf llama-api-server.wasm -p chatml
  ```
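
  Once the server is running, it exposes an OpenAI-compatible chat API. A minimal smoke test, assuming the server's default bind address of `0.0.0.0:8080` (adjust the port if you configured it differently):

  ```bash
  curl -X POST http://localhost:8080/v1/chat/completions \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is the capital of France?"}]}'
  ```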

- Run as LlamaEdge command app

  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:dolphin-2.7-mixtral-8x7b-Q5_K_M.gguf llama-chat.wasm -p chatml
  ```

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [dolphin-2.7-mixtral-8x7b-Q2_K.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q2_K.gguf) | Q2_K | 2 | 15.6 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2.7-mixtral-8x7b-Q3_K_L.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q3_K_L.gguf) | Q3_K_L | 3 | 20.4 GB | small, substantial quality loss |
| [dolphin-2.7-mixtral-8x7b-Q3_K_M.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q3_K_M.gguf) | Q3_K_M | 3 | 20.4 GB | very small, high quality loss |
| [dolphin-2.7-mixtral-8x7b-Q3_K_S.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q3_K_S.gguf) | Q3_K_S | 3 | 20.3 GB | very small, high quality loss |
| [dolphin-2.7-mixtral-8x7b-Q4_0.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q4_0.gguf) | Q4_0 | 4 | 26.4 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2.7-mixtral-8x7b-Q4_K_M.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q4_K_M.gguf) | Q4_K_M | 4 | 26.4 GB | medium, balanced quality - recommended |
| [dolphin-2.7-mixtral-8x7b-Q4_K_S.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q4_K_S.gguf) | Q4_K_S | 4 | 26.4 GB | small, greater quality loss |
| [dolphin-2.7-mixtral-8x7b-Q5_0.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q5_0.gguf) | Q5_0 | 5 | 32.2 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2.7-mixtral-8x7b-Q5_K_M.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q5_K_M.gguf) | Q5_K_M | 5 | 32.2 GB | large, very low quality loss - recommended |
| [dolphin-2.7-mixtral-8x7b-Q5_K_S.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q5_K_S.gguf) | Q5_K_S | 5 | 32.2 GB | large, low quality loss - recommended |
| [dolphin-2.7-mixtral-8x7b-Q6_K.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q6_K.gguf) | Q6_K | 6 | 38.4 GB | very large, extremely low quality loss |
| [dolphin-2.7-mixtral-8x7b-Q8_0.gguf](https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/blob/main/dolphin-2.7-mixtral-8x7b-Q8_0.gguf) | Q8_0 | 8 | 49.6 GB | very large, extremely low quality loss - not recommended |
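
The run commands above expect the chosen `.gguf` file to sit in the current directory. To download one of the files listed here, replace `blob` with `resolve` in its URL to get a direct download link, e.g. for the recommended Q5_K_M file:

```bash
curl -LO https://huggingface.co/second-state/Dolphin-2.7-mixtral-8x7b-GGUF/resolve/main/dolphin-2.7-mixtral-8x7b-Q5_K_M.gguf
```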