Model name typo in README
#1 by kepkar - opened

README.md CHANGED
@@ -36,12 +36,12 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
 ## How to use
 Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
 ```shell
-huggingface-cli download Qwen/CodeQwen1.5-7B-Chat-GGUF
+huggingface-cli download Qwen/CodeQwen1.5-7B-Chat-GGUF codeqwen-1_5-7b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 We demonstrate how to use `llama.cpp` to run Qwen1.5:
 ```shell
-./main -m
+./main -m codeqwen-1_5-7b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
 ```
 
 
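For readers who prefer Python over the CLI, the same file can be located programmatically with `huggingface_hub` (a minimal sketch, assuming `pip install huggingface_hub`; the repo and file names are taken from the patched README):

```python
# Sketch: build the direct download URL for the corrected GGUF file name
# using huggingface_hub's hf_hub_url helper (no network access needed).
from huggingface_hub import hf_hub_url

url = hf_hub_url(
    repo_id="Qwen/CodeQwen1.5-7B-Chat-GGUF",
    filename="codeqwen-1_5-7b-chat-q5_k_m.gguf",
)
print(url)  # a resolve/ URL usable with wget or curl
```

`hf_hub_download` from the same package would fetch the file itself, equivalent to the `huggingface-cli download` command in the diff.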