MaziyarPanahi committed · Commit 20bcd23
Parent(s): 48878ac
Update README.md (#3) (30dee7c55fc8325a84554dfcb95ccc87cbc8ea7b)

README.md CHANGED
@@ -1,13 +1,49 @@

Old README.md (13 lines):

---
license: apache-2.0
language:
- en
- fr
- es
- de
- it
---


Original README
New README.md (49 lines):

---
license: apache-2.0
base_model: mistralai/Mixtral-8x22B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
model_name: Mixtral-8x22B-Instruct-v0.1-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
- GGUF
- mixtral
- moe
language:
- fr
- en
- es
- it
- de
---

# Mixtral-8x22B-Instruct-v0.1-GGUF

The GGUF-quantized models here are based on the [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) model.

## How to download
You can download only the quants you need instead of cloning the entire repository, as follows:

```
huggingface-cli download MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF --local-dir . --include '*Q2_K*gguf'
```
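The `--include` flag above takes a shell-style glob that is matched against file names in the repository, so only matching quants are downloaded. A minimal sketch of that selection logic using Python's `fnmatch` (the Q2_K shard names come from this repo; the other file names are illustrative):

```python
# Sketch: how a glob like '*Q2_K*gguf' selects files to download.
# The Q2_K shard names match this repo; the Q4_K_M name is illustrative.
from fnmatch import fnmatchcase

repo_files = [
    "README.md",
    "Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf",
    "Mixtral-8x22B-Instruct-v0.1.Q2_K-00002-of-00005.gguf",
    "Mixtral-8x22B-Instruct-v0.1.Q4_K_M-00001-of-00009.gguf",  # illustrative
]

def select(files, pattern):
    """Keep only the files whose names match the glob pattern."""
    return [f for f in files if fnmatchcase(f, pattern)]

for name in select(repo_files, "*Q2_K*gguf"):
    print(name)
```

Any other quant can be fetched the same way by swapping the pattern, e.g. `'*Q4_K_M*gguf'`.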

## Load sharded model

`llama_load_model_from_file` detects the number of shards and loads the additional tensors from the remaining files.

```sh
llama.cpp/main -m Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
```
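Pointing `main` at the first shard is enough because the shard suffix follows llama.cpp's split-file naming convention (`-%05d-of-%05d.gguf`), from which the loader can enumerate the rest. A hypothetical helper sketching that enumeration:

```python
# Sketch (hypothetical helper, not part of llama.cpp): enumerate the shard
# file names of a split GGUF, following the -%05d-of-%05d.gguf convention.
def shard_names(base: str, n_shards: int) -> list[str]:
    """Return the expected file names of all shards for a split GGUF model."""
    return [f"{base}-{i:05d}-of-{n_shards:05d}.gguf" for i in range(1, n_shards + 1)]

for name in shard_names("Mixtral-8x22B-Instruct-v0.1.Q2_K", 5):
    print(name)
```

All five shards must be present in the same directory for the load to succeed.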


Original README