Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of `mistralai/Mixtral-8x7B-Instruct-v0.1`.

For this quantization, we used 1 codebook of 16 bits.

Selected evaluation results for this model:

| Model | AQLM scheme | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, GB | Hub link |
|------|------|------|-------|-------|-------|------|------|------|
| Mixtral-8x7B-Instruct-v0.1 (THIS) | 1x16 | 0.7593 | 0.8043 | 0.6179 | 0.7768 | 0.4793 | 12.6 | [Link](https://huggingface.co/BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf) |

To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
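As a quick illustration, AQLM checkpoints like this one can generally be loaded straight through `transformers`. The sketch below is a minimal example, assuming a recent `transformers` release with AQLM support, the `aqlm` package installed (`pip install aqlm[gpu]`), and a CUDA-capable GPU; for the authoritative setup and quantization instructions, see the GitHub repo linked above.

```python
# Minimal inference sketch; assumes `pip install aqlm[gpu]` and a recent
# `transformers` with AQLM support, plus a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep non-quantized layers in their saved dtype
    device_map="auto",   # place the quantized weights on the available GPU(s)
)

# Mixtral-Instruct expects its chat template for instruction-style prompts.
messages = [{"role": "user", "content": "Explain AQLM quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```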