
Llamacpp Quantizations of gemma-2-9b

Using llama.cpp release b3583 for quantization.
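
For reference, a typical llama.cpp quantization flow looks like the sketch below. The exact commands used for these files are not recorded in this card, and the paths and output names are illustrative.

```bash
# Convert the original Hugging Face checkpoint to a full-precision GGUF
# (assumes the google/gemma-2-9b weights are in ./gemma-2-9b).
python convert_hf_to_gguf.py ./gemma-2-9b --outtype f32 --outfile gemma-2-9b.FP32.gguf

# Quantize the FP32 GGUF to a given type, e.g. Q4_K_M.
./llama-quantize gemma-2-9b.FP32.gguf gemma-2-9b-Q4_K_M.gguf Q4_K_M
```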

Original model: https://huggingface.co/google/gemma-2-9b

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Perplexity (wikitext-2-raw-v1.test) |
| -------- | ---------- | --------- | ------------------------------------ |
| gemma-2-9b.FP32.gguf | FP32 | 37.00GB | 6.9209 +/- 0.04660 |
| gemma-2-9b-Q8_0.gguf | Q8_0 | 9.83GB | 6.9222 +/- 0.04660 |
| gemma-2-9b-Q6_K.gguf | Q6_K | 7.59GB | 6.9353 +/- 0.04675 |
| gemma-2-9b-Q5_K_M.gguf | Q5_K_M | 6.65GB | 6.9571 +/- 0.04687 |
| gemma-2-9b-Q5_K_S.gguf | Q5_K_S | 6.48GB | 6.9623 +/- 0.04690 |
| gemma-2-9b-Q4_K_M.gguf | Q4_K_M | 5.76GB | 7.0220 +/- 0.04737 |
| gemma-2-9b-Q4_K_S.gguf | Q4_K_S | 5.48GB | 7.0622 +/- 0.04777 |
| gemma-2-9b-Q3_K_L.gguf | Q3_K_L | 5.13GB | 7.2144 +/- 0.04910 |
| gemma-2-9b-Q3_K_M.gguf | Q3_K_M | 4.76GB | 7.2849 +/- 0.04970 |
| gemma-2-9b-Q3_K_S.gguf | Q3_K_S | 4.34GB | 7.6869 +/- 0.05373 |
| gemma-2-9b-Q2_K.gguf | Q2_K | 3.81GB | 8.7979 +/- 0.06191 |
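
The perplexity figures above are computed with llama.cpp's perplexity tool on the wikitext-2-raw-v1 test split; a minimal invocation looks roughly like this (the dataset path is illustrative):

```bash
# Evaluate wikitext-2 perplexity for one quant (repeat per file).
./llama-perplexity -m gemma-2-9b-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```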

Benchmark Results

| Benchmark | Quant type | Score |
| --------- | ---------- | ----- |
| WinoGrande (0-shot) | Q8_0 | 74.4278 +/- 1.2261 |
| WinoGrande (0-shot) | Q4_K_M | 74.8224 +/- 1.2198 |
| WinoGrande (0-shot) | Q3_K_M | 74.1910 +/- 1.2298 |
| WinoGrande (0-shot) | Q3_K_S | 72.6125 +/- 1.2533 |
| WinoGrande (0-shot) | Q2_K | 71.4286 +/- 1.2697 |
| HellaSwag (0-shot) | Q8_0 | 78.39075881 |
| HellaSwag (0-shot) | Q4_K_M | 77.87293368 |
| HellaSwag (0-shot) | Q3_K_M | 76.64807807 |
| HellaSwag (0-shot) | Q3_K_S | 76.08046206 |
| HellaSwag (0-shot) | Q2_K | 73.07309301 |
| MMLU (0-shot) | Q8_0 | 42.5065 +/- 1.2569 |
| MMLU (0-shot) | Q4_K_M | 42.5065 +/- 1.2569 |
| MMLU (0-shot) | Q3_K_M | 41.3437 +/- 1.2520 |
| MMLU (0-shot) | Q3_K_S | 40.5685 +/- 1.2484 |
| MMLU (0-shot) | Q2_K | 38.1137 +/- 1.2348 |

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

pip install -U "huggingface_hub[cli]"

Then, you can target the specific file you want:

huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q4_K_M.gguf" --local-dir ./

If the model is larger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:

huggingface-cli download fedric95/gemma-2-9b-GGUF --include "gemma-2-9b-Q8_0.gguf/*" --local-dir gemma-2-9b-Q8_0

You can either specify a new local-dir (gemma-2-9b-Q8_0) or download everything in place (./).
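
Once downloaded, a quick way to sanity-check a file is to run it with llama.cpp's llama-cli; the prompt and settings below are just an example:

```bash
# Run a short completion with the downloaded quant (gemma-2-9b is a base model,
# so use plain text completion rather than a chat template).
./llama-cli -m ./gemma-2-9b-Q4_K_M.gguf -p "The capital of France is" -n 64
```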

Reproducibility

https://github.com/ggerganov/llama.cpp/discussions/9020#discussioncomment-10335638
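
The linked discussion describes the evaluation setup. In outline, the benchmarks are run with llama.cpp's llama-perplexity tool, roughly as sketched below; the dataset file names are illustrative and must be fetched separately.

```bash
# WinoGrande and HellaSwag use llama.cpp's built-in task modes;
# MMLU is run through the generic multiple-choice mode.
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --winogrande -f winogrande-debiased-eval.csv
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --hellaswag -f hellaswag_val_full.txt
./llama-perplexity -m gemma-2-9b-Q8_0.gguf --multiple-choice -f <mmlu-data-file>
```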
