llama2-7b-base-4bit-AWQ / quant_config.json
TitanML Co
{
  "zero_point": true,
  "q_group_size": 128,
  "w_bit": 4,
  "version": "GEMM"
}
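For reference, the fields above configure AWQ weight-only quantization: `zero_point: true` selects asymmetric quantization (each group gets a zero-point offset in addition to a scale), `q_group_size: 128` means one scale/zero-point pair is shared per group of 128 weights, `w_bit: 4` stores weights in 4 bits, and `version: "GEMM"` picks the GEMM kernel variant. Below is a minimal illustrative sketch (not the official AWQ loader) that embeds this config and sanity-checks its fields; the helper name `validate_quant_config` is hypothetical.

```python
import json

# Contents of this repo's quant_config.json (AWQ settings).
QUANT_CONFIG = {
    "zero_point": True,   # asymmetric quantization: per-group zero-point offsets
    "q_group_size": 128,  # one scale/zero-point pair per 128 weights
    "w_bit": 4,           # weights stored in 4 bits
    "version": "GEMM",    # AWQ kernel variant (GEMM vs. GEMV)
}


def validate_quant_config(cfg: dict) -> None:
    """Illustrative sanity checks for an AWQ quant config (hypothetical helper)."""
    assert isinstance(cfg["zero_point"], bool)
    assert cfg["w_bit"] in (2, 3, 4, 8), "AWQ typically uses low bit widths"
    assert cfg["q_group_size"] > 0, "group size must be positive"
    assert cfg["version"] in ("GEMM", "GEMV")


validate_quant_config(QUANT_CONFIG)

# With q_group_size=128, a 4096x4096 weight matrix is split into
# 4096 * 4096 / 128 = 131072 groups, each with its own scale and zero point.
groups = 4096 * 4096 // QUANT_CONFIG["q_group_size"]
print(json.dumps(QUANT_CONFIG), groups)
```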