granite-3.1-8b-base-FP8-dynamic

Model Overview

  • Model Architecture: granite-3.1-8b-base
    • Input: Text
    • Output: Text
  • Model Optimizations:
    • Weight quantization: FP8
    • Activation quantization: FP8
  • Release Date: 1/8/2025
  • Version: 1.0
  • Model Developers: Neural Magic

Quantized version of ibm-granite/granite-3.1-8b-base. It achieves an average score of 66.94 on the OpenLLM benchmark (version 1), whereas the unquantized model achieves 67.44.

Model Optimizations

This model was obtained by quantizing the weights and activations of ibm-granite/granite-3.1-8b-base to the FP8 data type, ready for inference with vLLM >= 0.5.2. This optimization reduces the number of bits per parameter from 16 to 8, cutting disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
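"Dynamic" here means weight scales are fixed at quantization time, while activation scales are computed on the fly, per token, at inference, so no calibration data is needed for activations. Below is a minimal illustrative sketch of per-token dynamic FP8 (E4M3) quantization; the helper is hypothetical and not llm-compressor's internal code.

import torch

def quantize_fp8_per_token(x: torch.Tensor):
    """Illustrative per-token dynamic FP8 (E4M3) quantization.

    One scale per token (row), derived from that token's max magnitude.
    """
    finfo = torch.finfo(torch.float8_e4m3fn)
    # Per-row max magnitude -> one scale per token
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / finfo.max
    x_fp8 = (x / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_fp8, scale

x = torch.randn(4, 4096, dtype=torch.bfloat16)
x_fp8, scale = quantize_fp8_per_token(x.float())
# Each FP8 element occupies 1 byte instead of 2 (BF16/FP16), hence the ~50% savings
print(x_fp8.dtype, x_fp8.element_size(), scale.shape)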

Deployment

Use with vLLM

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm import LLM, SamplingParams

max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-8b-base-FP8-dynamic"
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256)

# This is a base (completion) model, so prompts are passed as raw text;
# no chat template is applied.
prompts = [
    "Who are you? Please respond in pirate speak!",
]

outputs = llm.generate(prompts, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
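For example, after starting an OpenAI-compatible server, the model can be queried with the standard OpenAI Python client. The port, API key, and prompt below are illustrative; since this is a base model, the completions (not chat) endpoint is used.

# Start the server first (shell):
#   vllm serve neuralmagic/granite-3.1-8b-base-FP8-dynamic --max-model-len 4096
from openai import OpenAI

# Local vLLM servers accept any API key; base URL assumes the default port 8000
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="neuralmagic/granite-3.1-8b-base-FP8-dynamic",
    prompt="The three primary colors are",
    max_tokens=32,
)
print(completion.choices[0].text)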

Creation

This model was created with llm-compressor by running the code snippet below.

Model Creation Code

Save the script below as quantize.py, then run:

python quantize.py --model_id ibm-granite/granite-3.1-8b-base --save_path "output_dir/"
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
import os

def main():
    parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
    parser.add_argument('--model_id', type=str, required=True,
                        help='The model ID from Hugging Face (e.g., "meta-llama/Meta-Llama-3-8B")')
    parser.add_argument('--save_path', type=str, default='.',
                        help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
    args = parser.parse_args()

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Configure the quantization algorithm and scheme
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")

if __name__ == "__main__":
    main()
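As a quick sanity check (not part of the original recipe), the saved checkpoint's config.json should carry a compressed-tensors quantization_config describing the FP8 scheme. A minimal sketch, assuming the output path produced by the script above:

import json
import os

# Hypothetical path: "output_dir/" save_path plus the derived directory name
save_path = "output_dir/granite-3.1-8b-base-FP8-dynamic"
with open(os.path.join(save_path, "config.json")) as f:
    config = json.load(f)

# compressed-tensors checkpoints record the quantization scheme here
print(json.dumps(config["quantization_config"], indent=2))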

Evaluation

The model was evaluated on the OpenLLM Leaderboard V1 benchmarks and on HumanEval, using the following commands:

Evaluation Commands

OpenLLM Leaderboard V1:

lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/granite-3.1-8b-base-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
  --tasks openllm \
  --write_out \
  --batch_size auto \
  --output_path output_dir \
  --show_config

HumanEval

Generation:

python3 codegen/generate.py \
  --model neuralmagic/granite-3.1-8b-base-FP8-dynamic \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval

Sanitization:

python3 evalplus/sanitize.py \
  humaneval/neuralmagic--granite-3.1-8b-base-FP8-dynamic_vllm_temp_0.2

Evaluation:

evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/neuralmagic--granite-3.1-8b-base-FP8-dynamic_vllm_temp_0.2-sanitized
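For reference, the HumanEval pass@1 figures below use the unbiased pass@k estimator from the HumanEval paper, computed over the 50 samples generated per problem:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n - c, k) / C(n, k),
    for n generated samples of which c pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 50 samples with 22 passing -> pass@1 of 0.44
print(pass_at_k(50, 22, 1))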

Accuracy

| Category   | Metric                             | ibm-granite/granite-3.1-8b-base | neuralmagic/granite-3.1-8b-base-FP8-dynamic | Recovery (%) |
|------------|------------------------------------|---------------------------------|---------------------------------------------|--------------|
| OpenLLM V1 | ARC-Challenge (Acc-Norm, 25-shot)  | 64.68                           | 64.16                                       | 99.20        |
|            | GSM8K (Strict-Match, 5-shot)       | 60.88                           | 58.45                                       | 95.99        |
|            | HellaSwag (Acc-Norm, 10-shot)      | 83.52                           | 83.46                                       | 99.93        |
|            | MMLU (Acc, 5-shot)                 | 63.33                           | 63.35                                       | 100.03       |
|            | TruthfulQA (MC2, 0-shot)           | 51.33                           | 51.56                                       | 100.45       |
|            | Winogrande (Acc, 5-shot)           | 80.90                           | 80.66                                       | 99.70        |
|            | Average Score                      | 67.44                           | 66.94                                       | 99.26        |
| Coding     | HumanEval Pass@1                   | 44.10                           | 44.80                                       | 101.59       |
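Recovery is simply the quantized score expressed as a percentage of the baseline score; for the OpenLLM V1 average:

baseline, quantized = 67.44, 66.94
print(f"Recovery: {quantized / baseline * 100:.2f}%")  # Recovery: 99.26%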