Llava on llama.cpp?

#30
by danielelongo - opened

Hello, is there a quantized version of llava that can run locally on a CPU with llama.cpp?

Llava Hugging Face org

@danielelongo Hey! We don't have GGUF-converted weights in this repo currently, but I saw some for llava-1.6 at https://huggingface.co/cjpais. Let me know if that helps; in any case, the conversion script is available in the llama.cpp repo if you want to convert them yourself :)
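For reference, a minimal sketch of running a quantized LLaVA GGUF on CPU with llama.cpp (the GGUF and mmproj filenames below are placeholders; substitute whichever files you download from a repo like the one linked above or produce with the conversion script):

```shell
# Build llama.cpp from source (CPU-only build)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run LLaVA: -m is the quantized language model, --mmproj is the
# vision projector; both filenames here are hypothetical examples.
./llava-cli \
  -m llava-v1.6-q4_k_m.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

Note that the binary name and flags may differ between llama.cpp versions, so check the repo's LLaVA README for the exact invocation.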

danielelongo changed discussion status to closed
