---
tags:
- w4a16
- int4
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-8b-base
library_name: transformers
---
# granite-3.1-8b-base-quantized.w4a16
## Model Overview
- **Model Architecture:** granite-3.1-8b-base
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
  - **Activation quantization:** None (activations remain in 16-bit precision)
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic
Quantized version of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base).
It achieves an average score of 65.52 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 67.44 (see the accuracy table below).
### Model Optimizations
This model was obtained by quantizing the weights of [ibm-granite/granite-3.1-8b-base](https://huggingface.co/ibm-granite/granite-3.1-8b-base) to INT4 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 4, cutting disk size and GPU memory requirements by approximately 75%. Only the weights of the linear operators within transformer blocks are quantized.
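As a rough back-of-the-envelope check of that figure (a sketch only; the parameter count is approximate, and embeddings, layer norms, and quantization scales are ignored):

```python
# Approximate checkpoint size of an ~8B-parameter model at different precisions.
params = 8.2e9        # approximate parameter count (assumption)
gib = 1024 ** 3

fp16_gib = params * 2 / gib    # 16 bits = 2 bytes per parameter
int4_gib = params * 0.5 / gib  # 4 bits = 0.5 bytes per parameter

print(f"FP16: {fp16_gib:.1f} GiB, INT4: {int4_gib:.1f} GiB")  # ~15.3 GiB vs ~3.8 GiB
print(f"Reduction: {1 - int4_gib / fp16_gib:.0%}")            # 75%
```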
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-8b-base-quantized.w4a16"

tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])

messages_list = [
    [{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]

prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
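For example, after launching a server with `vllm serve neuralmagic/granite-3.1-8b-base-quantized.w4a16`, the model can be queried with any OpenAI-compatible client. A minimal sketch, assuming the `openai` Python package and a server listening on the default `localhost:8000`:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (the API key is unused but required).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/granite-3.1-8b-base-quantized.w4a16",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.3,
    max_tokens=256,
)
print(completion.choices[0].message.content)
```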
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
**Model Creation Code**
```bash
python quantize.py \
  --model_path ibm-granite/granite-3.1-8b-base \
  --quant_path "output_dir/granite-3.1-8b-base-quantized.w4a16" \
  --calib_size 3072 \
  --dampening_frac 0.1 \
  --observer mse \
  --actorder static
```
The `quantize.py` script invoked above:
```python
import argparse

from datasets import load_dataset
from transformers import AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot

parser = argparse.ArgumentParser()
parser.add_argument('--model_path', type=str)
parser.add_argument('--quant_path', type=str)
parser.add_argument('--calib_size', type=int, default=256)
parser.add_argument('--dampening_frac', type=float, default=0.1)
parser.add_argument('--observer', type=str, default="minmax")
parser.add_argument('--actorder', type=str, default="dynamic")
args = parser.parse_args()

model = SparseAutoModelForCausalLM.from_pretrained(
    args.model_path,
    device_map="auto",
    torch_dtype="auto",
    use_cache=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_path)

# Load and subsample the calibration dataset.
NUM_CALIBRATION_SAMPLES = args.calib_size
DATASET_ID = "neuralmagic/LLM_compression_calibration"
DATASET_SPLIT = "train"
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

def preprocess(example):
    concat_txt = example["Instruction"] + "\n" + example["output"]
    return {"text": concat_txt}

ds = ds.map(preprocess)

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        truncation=False,
        add_special_tokens=True,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# Apply GPTQ quantization to all Linear layers except the LM head.
recipe = [
    GPTQModifier(
        targets=["Linear"],
        ignore=["lm_head"],
        scheme="w4a16",
        dampening_frac=args.dampening_frac,
        observer=args.observer,
        actorder=args.actorder,
    )
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    num_calibration_samples=args.calib_size,
    max_seq_length=8196,
)

# Save to disk in compressed (packed INT4) format.
model.save_pretrained(args.quant_path, save_compressed=True)
tokenizer.save_pretrained(args.quant_path)
```
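After quantization, a quick smoke test can confirm that the compressed checkpoint loads and generates. A minimal sketch, assuming a recent `transformers` with compressed-tensors support and the output directory used above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical local path: the --quant_path passed to the creation script above.
quant_path = "output_dir/granite-3.1-8b-base-quantized.w4a16"

tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoModelForCausalLM.from_pretrained(quant_path, device_map="auto", torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```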
## Evaluation
The model was evaluated on the OpenLLM Leaderboard benchmarks (versions 1 and 2) via [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) with the [vLLM](https://docs.vllm.ai/en/latest/) engine, and on HumanEval via the [EvalPlus](https://github.com/evalplus/evalplus) framework, using the commands below.
### Evaluation Commands
#### OpenLLM Leaderboard V1
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-base-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### OpenLLM Leaderboard V2
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-base-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks leaderboard \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### HumanEval
##### Generation
```
python3 codegen/generate.py \
--model neuralmagic/granite-3.1-8b-base-quantized.w4a16 \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
humaneval/neuralmagic--granite-3.1-8b-base-quantized.w4a16_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic--granite-3.1-8b-base-quantized.w4a16_vllm_temp_0.2-sanitized
```
### Accuracy

| Category | Metric | ibm-granite/granite-3.1-8b-base | neuralmagic/granite-3.1-8b-base-quantized.w4a16 | Recovery (%) |
|---|---|---|---|---|
| **OpenLLM V1** | ARC-Challenge (Acc-Norm, 25-shot) | 64.68 | 62.37 | 96.43 |
| | GSM8K (Strict-Match, 5-shot) | 60.88 | 54.89 | 90.16 |
| | HellaSwag (Acc-Norm, 10-shot) | 83.52 | 82.53 | 98.81 |
| | MMLU (Acc, 5-shot) | 63.33 | 62.78 | 99.13 |
| | TruthfulQA (MC2, 0-shot) | 51.33 | 51.30 | 99.94 |
| | Winogrande (Acc, 5-shot) | 80.90 | 79.24 | 97.95 |
| | **Average Score** | **67.44** | **65.52** | **97.15** |
| **Coding** | HumanEval Pass@1 | 44.10 | 40.70 | 92.28 |
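Recovery is simply the quantized model's score expressed as a percentage of the baseline score. A two-line sketch of the computation for the OpenLLM V1 average:

```python
baseline, quantized = 67.44, 65.52   # OpenLLM V1 average scores from the table above
recovery = 100 * quantized / baseline
print(f"{recovery:.2f}%")            # 97.15%
```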
## Inference Performance

As the tables below show, this model achieves up to ~2.7x speedup in single-stream deployment and up to ~1.7x speedup in multi-stream asynchronous deployment, depending on hardware and use-case scenario.

**Latency (s)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
|---|---|---|---|---|---|---|---|---|---|
| A5000 | granite-3.1-8b-base | | 28.3 | 3.7 | 28.8 | 3.8 | 3.6 | 7.2 | 15.7 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.60 | 17.7 | 2.3 | 18.0 | 2.4 | 2.2 | 4.5 | 10.0 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 2.61 | 10.3 | 1.5 | 10.7 | 1.5 | 1.3 | 2.7 | 6.6 |
| A6000 | granite-3.1-8b-base | | 25.8 | 3.4 | 26.2 | 3.4 | 3.3 | 6.5 | 14.2 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.50 | 17.4 | 2.3 | 16.9 | 2.2 | 2.2 | 4.4 | 9.8 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 2.48 | 10.0 | 1.4 | 10.4 | 1.5 | 1.3 | 2.5 | 6.2 |
| A100 | granite-3.1-8b-base | | 13.6 | 1.8 | 13.7 | 1.8 | 1.7 | 3.4 | 7.3 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.31 | 10.4 | 1.3 | 10.5 | 1.4 | 1.3 | 2.6 | 5.6 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 1.80 | 7.3 | 1.0 | 7.4 | 1.0 | 0.9 | 1.9 | 4.3 |
| L40 | granite-3.1-8b-base | | 25.1 | 3.2 | 25.3 | 3.2 | 3.2 | 6.3 | 13.4 |
| | granite-3.1-8b-base-FP8-dynamic | 1.47 | 16.8 | 2.2 | 17.1 | 2.2 | 2.1 | 4.2 | 9.3 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 2.72 | 8.9 | 1.2 | 9.2 | 1.2 | 1.1 | 2.3 | 5.3 |
**Maximum Throughput (Queries per Second)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
|---|---|---|---|---|---|---|---|---|---|
| A5000 | granite-3.1-8b-base | | 0.8 | 3.1 | 0.4 | 2.5 | 6.7 | 2.7 | 0.3 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.71 | 1.3 | 5.2 | 0.9 | 4.0 | 10.5 | 4.4 | 0.5 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 1.46 | 1.3 | 3.9 | 0.8 | 2.9 | 8.2 | 3.6 | 0.5 |
| A6000 | granite-3.1-8b-base | | 1.3 | 5.1 | 0.9 | 4.0 | 0.3 | 4.3 | 0.6 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.39 | 1.8 | 7.0 | 1.3 | 5.6 | 14.0 | 6.3 | 0.8 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 1.09 | 1.9 | 4.8 | 1.0 | 3.8 | 10.0 | 5.0 | 0.6 |
| A100 | granite-3.1-8b-base | | 3.1 | 10.7 | 2.1 | 8.5 | 20.6 | 9.6 | 1.4 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.23 | 3.8 | 14.2 | 2.1 | 11.4 | 25.9 | 12.1 | 1.7 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 0.96 | 3.4 | 9.0 | 2.6 | 7.2 | 18.0 | 8.8 | 1.3 |
| L40 | granite-3.1-8b-base | | 1.4 | 7.8 | 1.1 | 6.2 | 15.5 | 6.0 | 0.7 |
| | granite-3.1-8b-base-FP8-dynamic | 1.12 | 2.1 | 7.4 | 1.3 | 5.9 | 15.3 | 6.9 | 0.8 |
| | granite-3.1-8b-base-quantized.w4a16<br>(this model) | 1.29 | 2.4 | 8.9 | 1.4 | 7.1 | 17.8 | 7.8 | 1.0 |