
Custom GGUF quants with importance matrix (iMatrix) for: https://huggingface.co/MarsupialAI/LaDameBlanche-v2-95b

(Yes, I'm lazy, but I can live with a 0.01 PPL bump ^^)

The model is a great merge, sensical and creative. In my opinion, it works better under modest hardware requirements than the 100B+ Miqu merges, which are worthwhile only for those with 48GB of VRAM or more.

In IQ2_LR (2.7 BPW, suitable for 8k context with 36GB of VRAM and an iGPU handling the OS display), it scores 57 on ARC Challenge, 77 on ARC Easy, and a perplexity of 4.5860 at 512 context.
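A minimal sketch of fetching and running the quant with llama.cpp (the exact GGUF file name in this repo is an assumption; adjust it, the offload layer count, and the context size to your download and hardware):

```shell
# Download the IQ2 quant from this repo
# (file name below is illustrative; check the repo's file listing)
huggingface-cli download Nexesenex/LaDameBlanche-v2-95b-iMat-CQ.GGUF \
  LaDameBlanche-v2-95b.IQ2_LR.gguf --local-dir .

# Run with full GPU offload and 8k context, matching the setup above
./llama-cli -m LaDameBlanche-v2-95b.IQ2_LR.gguf \
  -c 8192 -ngl 99 \
  -p "Once upon a time"
```

With 36GB of VRAM, `-ngl 99` offloads all layers; on smaller cards, lower the value until the model fits.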

Ladies and gentlemen, you are served!

Model size: 94.6B params
Architecture: llama