Update README.md
README.md CHANGED
@@ -13,13 +13,8 @@ This model was converted to GGUF format from [`jackboot/uwu-qwen-32b`](https://huggingface.co/jackboot/uwu-qwen-32b)
 Refer to the [original model card](https://huggingface.co/jackboot/uwu-qwen-32b) for more details on the model.
 
 ## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux)
 
-```bash
-brew install llama.cpp
-
-```
-Invoke the llama.cpp server or the CLI.
+I hope that it's an up-to-date enough llama.cpp. In ooba you can swap the tokenizers around using the llama.cpp HF loader. This was created with the default mergekit tokenizer, which uses the QwQ BOS token.
 
 ### CLI:
 ```bash
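The hunk cuts off at the opening of the `### CLI:` code block, so the actual command is not shown. For reference, a GGUF-my-repo style README typically invokes `llama-cli` as in the sketch below; the repo and quant filenames here are placeholders, not the real ones from this repo.

```bash
# Placeholder repo/filename: substitute the actual GGUF repo and quant file.
llama-cli --hf-repo <your-namespace>/uwu-qwen-32b-GGUF \
  --hf-file uwu-qwen-32b-q4_k_m.gguf \
  -p "The meaning to life and the universe is"
```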
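Since the note above says the merge was built with the default mergekit tokenizer and carries the QwQ BOS token, it can be worth confirming which BOS token id actually landed in the GGUF metadata. A minimal check using the `gguf` Python package from llama.cpp's `gguf-py` (the quant filename is again a placeholder):

```bash
# Dump GGUF metadata only and look at the tokenizer BOS field; filename is a placeholder.
pip install gguf
gguf-dump --no-tensors uwu-qwen-32b-q4_k_m.gguf | grep -i "tokenizer.ggml.bos"
```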