Update README.md
README.md CHANGED
@@ -1,35 +1 @@
----
-tags:
-- llama-cpp
-- gguf-my-repo
----
-
-# WesPro/Wizardbeta-Q5_K_M-GGUF
-This model was converted to GGUF format from [`WesPro/Wizardbeta`](https://huggingface.co/WesPro/Wizardbeta) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/WesPro/Wizardbeta) for more details on the model.
-## Use with llama.cpp
-
-Install llama.cpp through brew.
-
-```bash
-brew install ggerganov/ggerganov/llama.cpp
-```
-Invoke the llama.cpp server or the CLI.
-
-CLI:
-
-```bash
-llama-cli --hf-repo WesPro/Wizardbeta-Q5_K_M-GGUF --model wizardbeta.Q5_K_M.gguf -p "The meaning to life and the universe is"
-```
-
-Server:
-
-```bash
-llama-server --hf-repo WesPro/Wizardbeta-Q5_K_M-GGUF --model wizardbeta.Q5_K_M.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
-```
-git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wizardbeta.Q5_K_M.gguf -n 128
-```
+Q5_K_M GGUF version of Wizard-Kun-Lake_3x7B-MoE (https://huggingface.co/WesPro/Wizard-Kun-Lake_3x7B-MoE). It's meant to be used for RP, or at least that's what I made it for.
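For trying the quant locally with llama.cpp, a minimal sketch based on the commands shown above; the repo id and the `wizardbeta.Q5_K_M.gguf` filename are carried over from those commands and may not match the file actually shipped in the repo, so adjust as needed:

```bash
# Download the quantized file from this repo, then run it with llama.cpp.
# Filename and repo id are assumptions taken from the commands above.
huggingface-cli download WesPro/Wizardbeta-Q5_K_M-GGUF wizardbeta.Q5_K_M.gguf --local-dir .
llama-cli -m wizardbeta.Q5_K_M.gguf -c 2048 -p "The meaning to life and the universe is"
```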