Upload README.md with huggingface_hub
README.md CHANGED
@@ -12,9 +12,11 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- qingy2024/QwQ-LongCoT-Verified-130K
 ---
 
-# qingy2024/QwQ-14B-Math-v0.2-
+# qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF
 This model was converted to GGUF format from [`qingy2024/QwQ-14B-Math-v0.2`](https://huggingface.co/qingy2024/QwQ-14B-Math-v0.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/qingy2024/QwQ-14B-Math-v0.2) for more details on the model.
 
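Since the updated card now names the exact quantized file, it can also be fetched ahead of time instead of relying on llama.cpp's `--hf-repo` helper. A minimal sketch using the huggingface_hub CLI (assumes `pip install huggingface_hub`):

```bash
# Download the single .gguf file from the repo; the command prints
# the local path of the downloaded file when it finishes.
huggingface-cli download qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF qwq-14b-math-v0.2-q6_k.gguf
```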
@@ -29,12 +31,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo qingy2024/QwQ-14B-Math-v0.2-
+llama-cli --hf-repo qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF --hf-file qwq-14b-math-v0.2-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo qingy2024/QwQ-14B-Math-v0.2-
+llama-server --hf-repo qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF --hf-file qwq-14b-math-v0.2-q6_k.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
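Once `llama-server` is running, it can be queried over HTTP. A minimal sketch against the OpenAI-compatible chat endpoint, assuming a recent llama.cpp build and the default port 8080:

```bash
# Send a single chat request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Solve for x: 2x + 3 = 11"}
        ],
        "temperature": 0.2
      }'
```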
@@ -51,9 +53,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo qingy2024/QwQ-14B-Math-v0.2-
+./llama-cli --hf-repo qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF --hf-file qwq-14b-math-v0.2-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo qingy2024/QwQ-14B-Math-v0.2-
+./llama-server --hf-repo qingy2024/QwQ-14B-Math-v0.2-Q6_K-GGUF --hf-file qwq-14b-math-v0.2-q6_k.gguf -c 2048
 ```
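The same binaries also accept a local model file via `-m`, e.g. the .gguf fetched with huggingface-cli above; a minimal sketch:

```bash
# Run inference against a locally downloaded .gguf instead of --hf-repo;
# adjust the path to wherever the file was saved.
./llama-cli -m qwq-14b-math-v0.2-q6_k.gguf -p "The meaning to life and the universe is"
```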