Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [google/gemma-2b](https://huggingface.co/google/gemma-2b). For this quantization, we used one codebook of 16 bits.

Results:

| Model | AQLM scheme | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, GB |
|------|------|------|-------|-------|-------|------|------|
| gemma-2b | 1x16 | 0.6275 | 0.7318 | 0.4582 | 0.6923 | 0.3259 | 1.7 |

To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
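
As a quick sanity check, an AQLM checkpoint like this one can be loaded through 🤗 Transformers like any other causal LM once the `aqlm` inference kernels are installed. The sketch below is a minimal example, not an official snippet: it assumes `transformers>=4.38` (which ships native AQLM support), the `aqlm` package installed via `pip install aqlm[gpu]`, and uses a placeholder repository ID that should be replaced with this model card's actual ID.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the actual repository ID of this model card.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place layers on the available device(s)
)

# Generate a short continuation to verify the quantized weights load and run.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```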