---
license: gemma
base_model: google/gemma-2-9b-it
---

GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:

- Original model: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Model creator: [Google](https://huggingface.co/google)
- [License](https://www.kaggle.com/models/google/gemma/license/consent?verifyToken=CfDJ8OV3w-Vr_2dIpZxXY9wVZZnpWKdFS3kJvSU2XkwpfOZICBFcOxoYJFb12HJj1BQs9FHgrjqpbEoqYjxdMwgaew-eH8JJmsLOgj56rjNeDFWaxTA36ggVQ1RJsKmH0mbl74o1qgioqSV5ktl-J5ebL9ep3JmOojU1HdBDSScB6WyGDSIuAcw8MWuy9LEE74Ze)

## Recommended Prompt Format (Gemma)

Gemma 2 defines only `user` and `model` turns. There is no dedicated system role, so any context and/or instructions for the model should be prepended to the first user turn.

```
<start_of_turn>user
The user's message goes here<end_of_turn>
<start_of_turn>model
AI message goes here<end_of_turn>
```

Quant Version: [b3405](https://github.com/ggerganov/llama.cpp/releases/tag/b3405) with [imatrix](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
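As a rough sketch, the turn structure above can be assembled programmatically. `format_gemma_prompt` below is a hypothetical helper (not part of llama.cpp or any library) that strings `<start_of_turn>` / `<end_of_turn>` tokens around a list of turns and appends the generation prompt for the model's reply:

```python
def format_gemma_prompt(messages):
    """Build a Gemma-2 prompt string from (role, text) turns.

    Roles are "user" or "model". Gemma 2 has no system role, so any
    system-style instructions should be folded into the first user turn.
    """
    parts = []
    for role, text in messages:
        parts.append(f"<start_of_turn>{role}\n{text}<end_of_turn>\n")
    # Trailing open turn: cue the model to generate its reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)


prompt = format_gemma_prompt([("user", "The user's message goes here")])
print(prompt)
```

Most runtimes (llama.cpp's chat mode, Hugging Face `apply_chat_template`) apply this template automatically; hand-building the string is only needed when sending raw prompts.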