Commit 4bd28af ("Update README.md") by justheuristic, parent 8315dca

README.md CHANGED
@@ -17,10 +17,10 @@ For this quantization, we used 1 codebook of 16 bits for groups of 8 weights.
 
 | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
 |------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
-| Llama-2-7b (this) |
-| Llama-2-7b |
-| Llama-2-13b|
-| Llama-2-70b|
+| Llama-2-7b (this) | 1x16 | 5.68 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) |
+| Llama-2-7b | 2x8 | 5.90 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) |
+| Llama-2-13b| 1x16 | 5.05 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf)|
+| Llama-2-70b| 1x16 | 3.78 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
 
 The 1x16g16 (1-bit) models are on the way, as soon as we update the inference lib with their respective kernels.
 
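The 1x16 scheme stores one 16-bit codebook index per group of 8 weights, i.e. about 2 bits per weight, which matches the "2Bit" tag in the linked repository names. Below is a minimal loading sketch for the 1x16 checkpoint listed in the table, assuming a transformers release with AQLM support and the aqlm package installed (neither requirement is stated in this commit):

```python
# Minimal sketch: load one of the 2-bit AQLM checkpoints linked in the table above.
# Assumptions (not stated in this commit): `pip install aqlm[gpu]`, a transformers
# version with AQLM support, and accelerate for device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf"  # taken from the table above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # place layers on the available GPU(s)
)

# Quick smoke test: generate a few tokens from a short prompt.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```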