Qwen2-VL-72B-Instruct-GPTQ-Int8 / model-00016-of-00021.safetensors

Commit History

fix(pad zero): pad intermediate_size to 29696 so the quantized model can use 8-way tensor parallelism in vLLM
d1eab90

可亲 committed
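
The 29696 target in the commit above appears to come from an alignment constraint: when the MLP weights are split 8 ways for tensor parallelism in vLLM, each shard's slice of intermediate_size must be a whole multiple of the quantization group size. A minimal sketch of that arithmetic, assuming the model's original intermediate_size is 29568 and a GPTQ group size of 128 (both are assumptions here, not stated in the commit):

```python
# Sketch: round intermediate_size up so each tensor-parallel shard
# is a whole multiple of the quantization group size.
# Assumed values: original intermediate_size = 29568, group_size = 128.

def pad_intermediate_size(intermediate_size: int,
                          tp_size: int = 8,
                          group_size: int = 128) -> int:
    """Smallest size >= intermediate_size such that
    (size / tp_size) is divisible by group_size."""
    multiple = tp_size * group_size              # 8 * 128 = 1024
    return -(-intermediate_size // multiple) * multiple  # ceiling to next multiple

print(pad_intermediate_size(29568))  # -> 29696, matching the commit message
```

Because the added channels are zero-filled (the "pad zero" in the commit title), the extra slice should contribute nothing to the MLP output; the padding only changes the tensor shapes so they shard evenly.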

Upload folder using huggingface_hub
8ca59d7
verified

clonefy committed