https://huggingface.co/rhplus0831/maid-yuzu-v7

#444
by yttria - opened

New llama.cpp broke old 8x7B quants

Do you have a pointer to what specifically you are referring to? In any case, I can redo the quants and also add imatrix quants for such models - and I have now queued this model for both.
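For context, re-generating quants with an importance matrix in llama.cpp typically looks like the sketch below. Filenames (`model-f16.gguf`, `calibration.txt`, `imatrix.dat`) are illustrative, not from this thread, and the exact tool names/flags depend on the llama.cpp version in use:

```shell
# Compute an importance matrix from calibration text
# (requires a full-precision GGUF and a calibration corpus)
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# Re-quantize using the importance matrix to improve low-bit quality
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

Running both steps for every affected model is what "redoing the quants" amounts to here.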

Thanks! So they actually only found out about this recently - I already redid "most" 8x7B Mixtral models a few months ago. Mixtral models are a pain to quant (they often blow up to twice the size for various reasons), and thankfully it seems those are not affected.

If you find other affected models (I wasn't even aware this one was a MoE), just drop me a note and I can redo them as well.

mradermacher changed discussion status to closed
