---
pipeline_tag: text-generation
tags:
- llama
- ggml
---

**Quantization from:** [Tap-M/Luna-AI-Llama2-Uncensored](https://huggingface.co/Tap-M/Luna-AI-Llama2-Uncensored)

**Converted to the GGML format with:** [llama.cpp master-294f424 (JUL 19, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-294f424)

**Tested with:** [koboldcpp 1.35](https://github.com/LostRuins/koboldcpp/releases/tag/v1.35)

**Example usage:**
```
koboldcpp.exe Luna-AI-Llama2-Uncensored-ggmlv3.Q2_K --threads 6 --stream --smartcontext --unbantokens --noblas
```

**Prompt format (refer to the original model for additional details):**
```
USER: {input}
ASSISTANT:
```
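If you are driving the model programmatically rather than through koboldcpp, the prompt format above can be assembled with a small helper. This is a minimal sketch; the `build_prompt` function name is a hypothetical example, not part of the model or any library.

```python
def build_prompt(user_input: str) -> str:
    # Hypothetical helper: wraps the user's text in the
    # USER/ASSISTANT format expected by this model.
    return f"USER: {user_input}\nASSISTANT:"

# The resulting string is what you would send as the raw prompt.
print(build_prompt("What is the capital of France?"))
```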
<details>
<summary>(Clickable) Alternative prompt format</summary>

I tested the model with the following format, which was specified in an older version of the model card. It works, but I'm leaving it behind this spoiler tag, as it's better to follow the format above to ensure the model works as intended.

**Tested with the following format (refer to [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) for additional details):**
```
### Instruction:
You're a digital assistant designed to provide helpful and accurate responses to the user.

### Input:
{input}

### Response:
```

</details>