---
base_model: GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: gemma
tags:
- llama-cpp
- gguf
---

# Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf

This model was converted to GGUF format from [`GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct`](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct) using llama.cpp. Refer to the [original model card](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct) for more details on the model.

## Use with llama.cpp

### CLI:

```bash
llama-cli --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
```

### Server:

```bash
llama-server --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
```

## Model Details

- **Quantization Type:** q8_0
- **Original Model:** [GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct)
- **Format:** GGUF
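
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080; `/v1/chat/completions` is one of its endpoints). The sketch below, using only the Python standard library, shows one way a client might build and send a chat request to that local server. The host, port, and prompt text are assumptions for illustration, not part of this model card.

```python
import json
import urllib.request

# Assumed local endpoint: llama-server's OpenAI-compatible chat API.
# Adjust host/port if you started the server with --host/--port.
URL = "http://localhost:8080/v1/chat/completions"

# A minimal chat payload; the model supports English, Indonesian,
# Javanese, and Sundanese, so any of those works as the prompt language.
payload = {
    "messages": [
        {"role": "user", "content": "Apa ibu kota Indonesia?"}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to actually send the request to a running server:
# with urllib.request.urlopen(request) as response:
#     body = json.load(response)
#     print(body["choices"][0]["message"]["content"])
```

The request itself is commented out so the snippet can be read without a server running; the same payload works with any OpenAI-compatible client library pointed at the local base URL.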