Mixtral Erotic 13Bx2 MOE 22B - GGUF

A quantized GGUF version of the model Mixtral Erotic 13Bx2 MOE 22B, converted with llama.cpp's convert.py.
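For reference, a GGUF file like this can be loaded locally with the llama-cpp-python bindings (or with llama.cpp directly). The sketch below is a minimal example, not an official usage guide; the file name is an assumption, so adjust the path to whichever .gguf file you download from this repository.

```python
# Minimal sketch: running a quantized GGUF file with llama-cpp-python.
# The model_path below is an assumed file name -- replace it with the
# actual .gguf file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-erotic-13bx2-moe-22b.Q6_K.gguf",  # assumed file name
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU only
)

output = llm(
    "Write a short scene description:",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```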

Format: GGUF
Model size: 21.5B params
Architecture: llama
Quantization: 6-bit