---
license: cc-by-nc-4.0
base_model: spow12/Ko-Qwen2-7B-Instruct
tags:
- gguf
model-index:
- name: joongi007/Ko-Qwen2-7B-Instruct-GGUF
  results: []
---

- Original model: [spow12/Ko-Qwen2-7B-Instruct](https://huggingface.co/spow12/Ko-Qwen2-7B-Instruct)
- Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3510](https://github.com/ggerganov/llama.cpp/releases/tag/b3510) (see the loading sketch below)
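
Any llama.cpp-based runtime can load the GGUF files. Below is a minimal sketch using the llama-cpp-python bindings; the file name and parameters are assumptions, so substitute the quantization you actually downloaded from this repository.

```python
# Minimal sketch for loading one of the GGUF quants with the llama-cpp-python
# bindings. The file name is an assumed example; use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Ko-Qwen2-7B-Instruct-Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,            # context window to allocate
    n_gpu_layers=-1,       # offload all layers to the GPU if one is available
    chat_format="chatml",  # matches the prompt template shown below
)
```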

The model uses the ChatML prompt template:

```prompt
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```
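
Continuing the sketch above, the high-level chat API renders messages into this ChatML layout before generation; the message contents here are illustrative placeholders only.

```python
# Sketch (continuing from the loading example above): the chat API renders
# these messages into the <|im_start|>/<|im_end|> ChatML prompt before
# generation. The message contents are illustrative placeholders.
messages = [
    {"role": "system", "content": "You are a helpful Korean assistant."},
    {"role": "user", "content": "간단히 자기소개를 해주세요."},  # "Please introduce yourself briefly."
]

response = llm.create_chat_completion(messages=messages, max_tokens=256)
print(response["choices"][0]["message"]["content"])
```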
~~"Flash Attention" function must be activated. [why?](https://www.reddit.com/r/LocalLLaMA/comments/1da19nu/if_your_qwen2_gguf_is_spitting_nonsense_enable/)~~