---
pipeline_tag: text-generation
tags:
- llama
- ggml
---

**Quantization from:** [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16)

**Converted to the GGML format with:** [llama.cpp master-6e7cca4 (JUL 15, 2023)](https://github.com/ggerganov/llama.cpp/releases/tag/master-6e7cca4)

**Tested with:** [koboldcpp 1.35](https://github.com/LostRuins/koboldcpp/releases/tag/v1.35)

**Example usage:**
```
koboldcpp.exe airoboros-33b-gpt4-1.4.1-PI-8192-ggmlv3.Q2_K.bin --threads 6 --linearrope --contextsize 8192 --stream --smartcontext --unbantokens --noblas
```
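
**Reproducing the quantization (sketch):** The commands below are an assumed outline of the standard llama.cpp workflow at release master-6e7cca4, not the exact invocation used to produce this file; the source directory and output file names are placeholders.
```
# Convert the fp16 HF checkpoint to a GGML f16 file (source directory name assumed)
python convert.py airoboros-33b-gpt4-1.4.1-PI-8192-fp16 --outtype f16 --outfile airoboros-33b-gpt4-1.4.1-PI-8192-ggmlv3.f16.bin

# Quantize the f16 GGML file down to Q2_K
./quantize airoboros-33b-gpt4-1.4.1-PI-8192-ggmlv3.f16.bin airoboros-33b-gpt4-1.4.1-PI-8192-ggmlv3.Q2_K.bin q2_K
```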