FocusMix 7B GGUF
Static quantization made with llama.cpp release b3557.
Original model: https://huggingface.co/Nelathan/Qwen2-7B-FocusMix
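The quants can be reproduced roughly as follows. This is a minimal sketch, assuming a local llama.cpp checkout at b3557 with the binaries built; the paths and the Q4_K_M target are illustrative, not the exact recipe used for this repo.

```python
# Sketch: convert the HF checkpoint to GGUF, then statically quantize it with llama.cpp.
# Assumes llama.cpp b3557 is checked out and built; paths and quant type are illustrative.
import subprocess

HF_MODEL_DIR = "Qwen2-7B-FocusMix"           # local clone of the original model (assumption)
F16_GGUF = "FocusMix-7B-f16.gguf"
QUANT_GGUF = "FocusMix-7B-Q4_K_M.gguf"

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Statically quantize (no importance matrix) to the desired quant type.
subprocess.run(
    ["llama.cpp/llama-quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```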
Prompt template (ChatML):
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
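A minimal inference sketch using llama-cpp-python (the runtime is an assumption; llama.cpp's llama-cli works just as well), with the prompt assembled exactly as in the template above. The GGUF file name is illustrative.

```python
# Sketch: run the GGUF with llama-cpp-python and the ChatML prompt format shown above.
from llama_cpp import Llama

llm = Llama(model_path="FocusMix-7B-Q4_K_M.gguf", n_ctx=4096)  # file name is illustrative

system = "You are a helpful assistant."
user = "Summarize what a GGUF file is in one sentence."

# Build the ChatML prompt; generation stops at the assistant's <|im_end|> token.
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```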