---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
base_model: Sao10K/14B-Qwen2.5-Kunou-v1
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
model-index:
- name: 14B-Qwen2.5-Kunou-v1
  results: []
---
# Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF

This model was converted to GGUF format from [`Sao10K/14B-Qwen2.5-Kunou-v1`](https://huggingface.co/Sao10K/14B-Qwen2.5-Kunou-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Sao10K/14B-Qwen2.5-Kunou-v1) for more details on the model.
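If you want to reproduce the conversion locally instead of using the space, llama.cpp ships a converter script and a quantization tool. The sketch below is an assumption about how a file like this one gets produced, not the exact commands used for this repo; paths and file names are placeholders.

```bash
# Sketch: convert the HF checkpoint to an f16 GGUF, then quantize to Q8_0.
# Assumes a local clone of llama.cpp and of the original model repo.
python convert_hf_to_gguf.py ./14B-Qwen2.5-Kunou-v1 \
  --outfile 14b-qwen2.5-kunou-v1-f16.gguf --outtype f16
./llama-quantize 14b-qwen2.5-kunou-v1-f16.gguf \
  14b-qwen2.5-kunou-v1-q8_0.gguf Q8_0
```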
---

## Model details
I do not really have anything planned for this model other than it being a generalist and roleplay model. It was just something made and planned in minutes.

This is the little sister variant, the small 14B version.

Kunou's the name of an OC I worked on for a couple of years, for a... fanfic. mmm...

A kind-of successor to my smaller model series. It works pretty nicely, I think?

This version basically uses a better, more cleaned-up version of the dataset used for Euryale and Stheno.
### Recommended Model Settings

Look, I just use these, they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.

- Prompt Format: ChatML
- Temperature: 1.1
- min_p: 0.1
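For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` tokens, so a prompt for this model looks roughly like this (the placeholder text is illustrative):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```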
Special thanks to my wallet for funding this, my juniors who share a single braincell between them, and my current national service.

There have been more stabbings and accidents out there this winter season. It's wild. Stay safe, everyone.

Also, sorry for the inactivity. Life was in the way. It still is, just less so, for now. Burnout is a thing, huh?

https://sao10k.carrd.co/ for contact.

---
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
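If the install succeeded, a quick sanity check is to print the version; the exact version string will vary by build:

```bash
llama-cli --version
```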
Invoke the llama.cpp server or the CLI.
### CLI:

```bash
llama-cli --hf-repo Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF --hf-file 14b-qwen2.5-kunou-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
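To apply the sampler settings recommended above, the matching llama.cpp flags can be added to the same command; `-cnv` switches llama-cli into conversation mode, in which `-p` is treated as the system prompt (the prompt text here is just an example):

```bash
llama-cli --hf-repo Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF \
  --hf-file 14b-qwen2.5-kunou-v1-q8_0.gguf \
  --temp 1.1 --min-p 0.1 -cnv -p "You are Kunou, a helpful roleplaying partner."
```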
### Server:

```bash
llama-server --hf-repo Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF --hf-file 14b-qwen2.5-kunou-v1-q8_0.gguf -c 2048
```
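Once the server is up, it exposes an OpenAI-compatible endpoint (on port 8080 by default); a minimal request, assuming default settings, looks like this. Note that `min_p` is a llama.cpp extension to the OpenAI request schema, and the message content is just an example:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "temperature": 1.1,
    "min_p": 0.1
  }'
```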
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
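For example, the GPU build mentioned above would combine the flags like this (a sketch assuming Linux with the CUDA toolkit installed, based on the note in Step 2):

```bash
# Build with CURL support and the NVIDIA CUDA backend enabled.
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j
```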
Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF --hf-file 14b-qwen2.5-kunou-v1-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/14B-Qwen2.5-Kunou-v1-Q8_0-GGUF --hf-file 14b-qwen2.5-kunou-v1-q8_0.gguf -c 2048
```