
ditioner/gemma-2-9b-Q8_0-GGUF

This model was converted to GGUF format from unsloth/gemma-2-9b using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
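The card itself does not include usage code, so the snippet below is a minimal sketch of one way to download the Q8_0 file and run it locally with huggingface_hub and llama-cpp-python. The GGUF filename passed to `hf_hub_download` is an assumption; check the repository's file listing for the actual name.

```python
# Minimal sketch (not from the original card): fetch the quantized weights
# from the Hub and run a plain text completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q8_0 GGUF file; the filename is assumed, verify it in the repo's Files tab.
model_path = hf_hub_download(
    repo_id="ditioner/gemma-2-9b-Q8_0-GGUF",
    filename="gemma-2-9b-q8_0.gguf",
)

# Load the model; n_ctx sets the context window, n_gpu_layers=-1 offloads all layers to GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# gemma-2-9b is a base (non-instruct) model, so use plain completion rather than a chat template.
output = llm("The capital of France is", max_tokens=32)
print(output["choices"][0]["text"])
```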

Format: GGUF
Model size: 9.24B params
Architecture: gemma2
Quantization: 8-bit (Q8_0)

Quantized from: unsloth/gemma-2-9b