---
pipeline_tag: text-generation
tags:
- llama
- ggml
---
**Quantized from:**
[TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged)
**Converted to the GGML format with:**
[llama.cpp master-b5fe67f (JUL 22, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-b5fe67f)
**Tested with:**
[koboldcpp 1.36](https://github.com/LostRuins/koboldcpp/releases/tag/v1.36)
**Example usage:**
```
koboldcpp.exe llama2-7b-chat-hf-codeCherryPop-qLoRA-merged-ggmlv3.Q6_K.bin --threads 6 --contextsize 4096 --stream --smartcontext --unbantokens --ropeconfig 1.0 10000 --noblas
```
**Tested with the following format (refer to the original model and [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for additional details):**
```
### Instruction:
{code request}
### Response:
```
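The template above can be assembled programmatically. A minimal sketch, assuming a simple helper function (`build_prompt` and the example request are illustrative, not part of the model card):

```python
def build_prompt(code_request: str) -> str:
    """Wrap a code request in the Alpaca-style scaffold this model expects."""
    # Only the "### Instruction:" / "### Response:" scaffold comes from the
    # model card; the request text itself is a placeholder.
    return (
        "### Instruction:\n"
        f"{code_request}\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The model generates its answer after the `### Response:` line, so the prompt should end there.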