Update README.md
README.md
CHANGED
@@ -21,42 +21,13 @@ tags:
This model was converted to GGUF format from [`EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1`](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1) for more details on the model.
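The GGUF-my-repo space automates the conversion; for reference, the commands below are a rough sketch of the equivalent local workflow with a llama.cpp checkout. The input path, output file names, and the Q4_K_M quantization type are illustrative assumptions, not details taken from this repo.

```bash
# Get llama.cpp and the Python dependencies for its HF -> GGUF converter
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the original Hugging Face model to a full-precision GGUF file
python convert_hf_to_gguf.py path/to/EVA-Qwen2.5-7B-v0.1 \
  --outfile eva-qwen2.5-7b-v0.1-f16.gguf --outtype f16

# Optionally quantize (Q4_K_M shown as an example; requires the built llama-quantize tool)
llama-quantize eva-qwen2.5-7b-v0.1-f16.gguf eva-qwen2.5-7b-v0.1-q4_k_m.gguf Q4_K_M
```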
---
Model details:
An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-7B on a mixture of synthetic and natural data.
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve
@@ -280,15 +251,6 @@ endeavors
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
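The commands below are a minimal sketch of that workflow. The `--hf-repo` and `--hf-file` values are placeholders: substitute this repository's actual name and the GGUF file it publishes, which are not shown in this diff.

```bash
# Install the llama.cpp CLI tools (macOS / Linux)
brew install llama.cpp

# Run inference directly, letting llama.cpp fetch the GGUF file from the Hub
# (replace the repo and file names with the ones published in this repository)
llama-cli --hf-repo <your-namespace>/EVA-Qwen2.5-7B-v0.1-GGUF \
  --hf-file eva-qwen2.5-7b-v0.1-q4_k_m.gguf \
  -p "Once upon a time"

# Or start an OpenAI-compatible HTTP server on the same model
llama-server --hf-repo <your-namespace>/EVA-Qwen2.5-7B-v0.1-GGUF \
  --hf-file eva-qwen2.5-7b-v0.1-q4_k_m.gguf \
  -c 2048
```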