Official quantization of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).

For this quantization, we used 1 codebook of 16 bits for groups of 8 weights.
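
In other words, one 16-bit code per group of 8 weights comes out to roughly 2 bits per quantized weight. The back-of-the-envelope estimate below is only a sketch: the parameter count and the assumption that embeddings and the LM head stay in 16-bit are illustrative, not figures taken from this card.

```python
# Rough size estimate for the 1x16 scheme (one 16-bit code per group of 8 weights).
# All concrete numbers below are assumptions for illustration, not official figures.

total_params = 7.24e9        # approximate parameter count of Mistral-7B-v0.1
fp16_params = 0.26e9         # assumed kept in 16-bit (embeddings, LM head, norms)

bits_per_quantized_weight = 16 / 8   # one 16-bit code for 8 weights -> ~2 bits/weight

quantized_bytes = (total_params - fp16_params) * bits_per_quantized_weight / 8
fp16_bytes = fp16_params * 2

print(f"~{(quantized_bytes + fp16_bytes) / 1e9:.2f} GB before codebook/scale overhead")
# Per-layer codebooks and scales add some overhead on top of this estimate,
# which is consistent with the ~2.51 GB reported in the results table below.
```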

Results (0-shot `acc`):
| Model | Quantization | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, GB |
|------|------|------|-------|-------|-------|------|------|
| Mistral-7B-v0.1 | None | 0.7364 | 0.8047 | 0.6115 | 0.7887 | 0.4923 | 14.5 |
| | 1x16 (this) | 0.7151 | 0.7976 | 0.5880 | 0.7698 | 0.4514 | 2.51 |
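
These numbers can, in principle, be reproduced with EleutherAI's [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). The sketch below is an assumed setup (placeholder repository id; task names and batch size chosen for illustration), not the exact command used to produce the table.

```python
# Hypothetical reproduction sketch with lm-evaluation-harness (pip install lm-eval).
# The repo id is a placeholder; the exact evaluation settings for this card are not specified.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=<this-model-repo>,dtype=float16",
    tasks=["winogrande", "piqa", "hellaswag", "arc_easy", "arc_challenge"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```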

The 1x16g16 (1-bit) models are on the way and will be released as soon as we update the inference library with the corresponding kernels.

To learn more about inference with these models, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
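
As a quick start, AQLM checkpoints can typically be loaded through `transformers` once the `aqlm` inference library is installed (e.g. `pip install aqlm[gpu]`). The snippet below is a minimal sketch with a placeholder model id; see the repository above for the authoritative instructions.

```python
# Minimal sketch: loading an AQLM-quantized checkpoint with transformers.
# Assumes `pip install aqlm[gpu]` and a recent transformers release with AQLM support;
# the model id below is a placeholder for this repository's id on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo>"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```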