3x7b MoE Models in GGUF
Hi mradermacher,
thanks for all your great work, I really appreciate your quantization effort. I noticed that koboldcpp 1.61.2 is not capable of running GGUF versions of MoE models, specifically 3x7b ones. I tried more than 5 models in 3x7b GGUF and koboldcpp just shuts down after trying to load. 2x7b, 4x7b and 8x7b seem to be fine, and even 2x13b, 2x10.7b etc. can be loaded. I have 64GB RAM, so it's not an issue of size. Did you notice that as well?
thanks in advance
No, I'll have a look, though.
Just tried out the Q4_K_S variant, and it works fine in both llama.cpp and koboldcpp, so it doesn't seem to be an issue with the model or koboldcpp, but probably a settings issue. It would probably help to know what "shutting down" means - do you get an error? Which one?
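If koboldcpp exits without showing anything, one way to surface the underlying loader error is to open the same GGUF file with the llama-cpp-python bindings from a terminal, since they raise the llama.cpp failure as an exception. This is just a sketch, not a confirmed fix; the file name below is a placeholder for whichever 3x7b quant fails for you.

```python
# Minimal sketch: try to load the failing GGUF directly via llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder, not one
# of the actual models from this thread.
from llama_cpp import Llama

try:
    llm = Llama(model_path="some-3x7b-model.Q4_K_S.gguf", n_ctx=2048)
    print("Model loaded fine")
except Exception as e:
    # llama.cpp also prints its own loader diagnostics to the terminal;
    # this just reports whatever exception the bindings raise on failure.
    print("Load failed:", e)
```

Whatever error shows up there (or in the terminal output) would help narrow down whether it's the file, the settings, or the koboldcpp build.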