Support for vllm/lmdeploy?

#1
by solankibhargav - opened

Thanks for building and sharing this model. Is there support for a faster inference engine such as lmdeploy or vLLM? (I tried both, and both currently fail.)

