AWQ-quantized package (W4G128, i.e. 4-bit weights with group size 128) of google/gemma-2-9b. Support for Gemma2 in the AutoAWQ codebase is proposed in pull request #562. To use the model, follow the usual AutoAWQ examples, but install AutoAWQ from the source of #562.
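For reference, below is a minimal sketch of how a W4G128 package like this one could be produced with AutoAWQ's standard quantization workflow. The exact calibration data and config used for this package are not stated, so the values below are assumptions.

```python
# Hypothetical quantization sketch following AutoAWQ's standard examples.
# The config and output path are assumptions, not the exact recipe used here.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_path = "google/gemma-2-9b"
quant_path = "gemma-2-9b-AWQ"

# W4G128: 4-bit weights, group size 128
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```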

## Evaluation

- WikiText-2 perplexity (PPL): 7.08
- C4 perplexity (PPL): 11.05
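The evaluation setup (context length, stride, tokenization details) is not stated. As a rough sketch, perplexity numbers like these are typically computed by scoring the WikiText-2 test split in fixed-size chunks; the context length below is an assumption.

```python
# Hypothetical perplexity sketch; not necessarily the recipe used for the numbers above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "radi-cho/gemma-2-9b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda:0")
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
input_ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

ctx_len = 2048  # assumed evaluation context length
total_nll, total_tokens = 0.0, 0
for begin in range(0, input_ids.size(1), ctx_len):
    chunk = input_ids[:, begin : begin + ctx_len].to(model.device)
    if chunk.size(1) < 2:
        break  # need at least one predicted token
    with torch.no_grad():
        # loss is the mean NLL over chunk.size(1) - 1 predicted tokens
        loss = model(chunk, labels=chunk).loss
    total_nll += loss.item() * (chunk.size(1) - 1)
    total_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.tensor(total_nll / total_tokens)).item()
print(f"WikiText-2 perplexity: {ppl:.2f}")
```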

## Loading

```python
model_path = "radi-cho/gemma-2-9b-AWQ"

# With transformers
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda:0")

# With transformers (fused AWQ modules)
from transformers import AutoModelForCausalLM, AwqConfig
quantization_config = AwqConfig(bits=4, fuse_max_seq_len=512, do_fuse=True)
model = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=quantization_config).to(0)

# With AutoAWQ
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized(model_path)
```
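Once a model object is loaded (shown here for the transformers-loaded variant on `cuda:0`), a short generation call such as the following sketch should work; the prompt and generation settings are purely illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Illustrative prompt and settings; adjust as needed.
inputs = tokenizer("The tallest mountain on Earth is", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```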