neopolita/gemma-2-9b-gguf
Tags: GGUF · Inference Endpoints
Files and versions
1 contributor · 9 commits
Latest commit c7099e3 (verified, 4 months ago) by neopolita: Upload gemma-2-9b_q8_0.gguf with huggingface_hub
File                      Size     LFS  Last commit message                                 Updated
.gitattributes            1.92 kB       Upload gemma-2-9b_q8_0.gguf with huggingface_hub    4 months ago
README.md                 1.4 kB        Upload README.md with huggingface_hub               4 months ago
gemma-2-9b_q2_k.gguf      3.81 GB  LFS  Upload gemma-2-9b_q2_k.gguf with huggingface_hub    4 months ago
gemma-2-9b_q3_k_m.gguf    4.76 GB  LFS  Upload gemma-2-9b_q3_k_m.gguf with huggingface_hub  4 months ago
gemma-2-9b_q4_k_m.gguf    5.76 GB  LFS  Upload gemma-2-9b_q4_k_m.gguf with huggingface_hub  4 months ago
gemma-2-9b_q5_k_m.gguf    6.65 GB  LFS  Upload gemma-2-9b_q5_k_m.gguf with huggingface_hub  4 months ago
gemma-2-9b_q6_k.gguf      7.59 GB  LFS  Upload gemma-2-9b_q6_k.gguf with huggingface_hub    4 months ago
gemma-2-9b_q8_0.gguf      9.83 GB  LFS  Upload gemma-2-9b_q8_0.gguf with huggingface_hub    4 months ago
ggml-model-f16.gguf       18.5 GB  LFS  Upload ggml-model-f16.gguf with huggingface_hub     4 months ago
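
Every file in this repo was uploaded with huggingface_hub, and the same library can fetch a single quantization back down. Below is a minimal sketch of that round trip; it assumes the llama-cpp-python package as the GGUF runtime (this repo does not prescribe one), picks the q4_k_m variant arbitrarily from the table above, and uses illustrative parameter values rather than recommendations from the model card.

```python
# Minimal sketch: download one GGUF quantization and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; llama-cpp-python is
# not referenced by this repo, it is simply a common way to run GGUF files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single file from the repo (cached under the local Hugging Face cache).
model_path = hf_hub_download(
    repo_id="neopolita/gemma-2-9b-gguf",
    filename="gemma-2-9b_q4_k_m.gguf",  # 5.76 GB; pick a smaller quant if memory is tight
)

# Load the quantized model; n_ctx is an illustrative value, not a repo setting.
llm = Llama(model_path=model_path, n_ctx=4096)

# gemma-2-9b is the base (non-instruct) model, so plain completion is used here.
out = llm("The GGUF file format is", max_tokens=64)
print(out["choices"][0]["text"])
```

Swapping in another filename from the table (for example gemma-2-9b_q8_0.gguf for higher fidelity, or gemma-2-9b_q2_k.gguf for the smallest footprint) is the only change needed to try a different quantization.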