Struggling to use this with vLLM

#1
by Kayvane - opened

Looking through the code, it appears vLLM is looking for a specific quantization config file name, but the quantization settings are now merged into config.json:

https://github.com/vllm-project/vllm/blob/bb2fc08072db2d96e547407b4301fb6ba141d9d6/vllm/model_executor/layers/quantization/awq.py#L54
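If you want to test that theory, here's a minimal workaround sketch: copy the quantization settings out of config.json into a standalone file. It assumes the settings live under a `quantization_config` key and that the loader picks up a `quant_config.json` file next to the weights; both are assumptions based on reading the linked code, not verified against this repo.

```python
import json
import os

# Hypothetical workaround: newer AutoAWQ exports merge the quantization
# settings into config.json under "quantization_config", while the linked
# vLLM loader looks for a standalone file. Extract the section and write
# it back out as quant_config.json. Key name and file name are assumptions.

model_dir = "path/to/local/model"  # hypothetical local snapshot of the repo

with open(os.path.join(model_dir, "config.json")) as f:
    config = json.load(f)

quant_config = config.get("quantization_config")
if quant_config is None:
    raise ValueError("config.json has no 'quantization_config' section")

# Write the section out as the standalone file the loader appears to expect.
with open(os.path.join(model_dir, "quant_config.json"), "w") as f:
    json.dump(quant_config, f, indent=2)
```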

You can try using vllm==0.5.3 with torch==2.3.1.
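For example: `pip install vllm==0.5.3 torch==2.3.1` (assuming a plain pip environment; adjust for your setup).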
