---
license: cc-by-nc-4.0
base_model: spow12/Ko-Qwen2-7B-Instruct
tags:
- gguf
model-index:
- name: joongi007/Ko-Qwen2-7B-Instruct-GGUF
  results: []
---

- The original model is [spow12/Ko-Qwen2-7B-Instruct](https://huggingface.co/spow12/Ko-Qwen2-7B-Instruct)
- Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3510](https://github.com/ggerganov/llama.cpp/releases/tag/b3510)

Prompt template:

```prompt
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```

~~The "Flash Attention" option must be enabled. [why?](https://www.reddit.com/r/LocalLLaMA/comments/1da19nu/if_your_qwen2_gguf_is_spitting_nonsense_enable/)~~
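
For reference, a minimal usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which builds the ChatML-style prompt shown above by hand. The GGUF file name and the generation settings below are assumptions; substitute the quantization file you actually download from this repo.

```python
# Minimal sketch, not an official example. The model_path below is a
# hypothetical file name -- point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./ko-qwen2-7b-instruct-q4_k_m.gguf",  # assumed file name
    n_ctx=4096,
)

# Fill the prompt template from the model card (ChatML format).
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "안녕하세요, 자기소개를 해주세요.<|im_end|>\n"  # "Hello, please introduce yourself."
    "<|im_start|>assistant\n"
)

# Stop on the end-of-turn token so generation ends after the assistant reply.
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```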