Just saying, it breaks whenever I try to load individual models in oobabooga

#5
by hellothereeeee - opened

Downloading all the models worked, but that was 50 gigabytes... then installing one model broke oobabooga, so I just gave up and used legacy. Can you fix the problem? I have the bug report here: https://github.com/oobabooga/text-generation-webui/issues/1495

P.S. I realized this wasn't the ggml version; pretend it is the ggml version.

Assuming you're the author of that issue, you're trying to load the ggml 13b q4_2 model into oobabooga. That's a different model from the one in this repository.

Anyway, the problem is that oobabooga doesn't use the latest llama.cpp bindings. You will have to update them manually or wait until oobabooga ships an update.
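Before updating anything manually, it helps to confirm which bindings version your oobabooga environment actually has. A minimal sketch, assuming the bindings come from the `llama-cpp-python` package (that name is an assumption; check oobabooga's requirements file if yours differs):

```python
from importlib import metadata

def bindings_version(package: str = "llama-cpp-python") -> "str | None":
    """Return the installed version of `package`, or None if it isn't installed.

    "llama-cpp-python" is assumed to be the llama.cpp bindings package
    oobabooga uses; substitute the real package name from your environment.
    """
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Prints the installed bindings version, or None if the package is missing.
print(bindings_version())
```

If the version is older than what the quantization format requires, something like `pip install --upgrade llama-cpp-python` inside oobabooga's environment would pull in newer bindings, though whether the web UI is compatible with them is a separate question.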

Overall, my ggml conversions use bleeding-edge quantization that absolutely requires the latest version of llama.cpp to work.

I see, I'll wait for a new llama.cpp update. Thanks!

hellothereeeee changed discussion status to closed
