justheuristic committed
Commit 4bd28af
Parent(s): 8315dca

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -17,10 +17,10 @@ For this quantization, we used 1 codebook of 16 bits for groups of 8 weights.
 
 | Model | AQLM scheme | WikiText 2 PPL | Model size, Gb | Hub link |
 |------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
-| Llama-2-7b (this) | 1x16g8 | 5.68 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) |
-| Llama-2-7b | 2x8g8 | 5.90 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) |
-| Llama-2-13b| 1x16g8 | 5.05 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf)|
-| Llama-2-70b| 1x16g8 | 3.78 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
+| Llama-2-7b (this) | 1x16 | 5.68 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) |
+| Llama-2-7b | 2x8 | 5.90 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) |
+| Llama-2-13b| 1x16 | 5.05 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf)|
+| Llama-2-70b| 1x16 | 3.78 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf)|
 
 The 1x16g16 (1-bit) models are on the way, as soon as we update the inference lib with their respective kernels.
 
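For reference, the checkpoints listed in the table above can be loaded through the standard `transformers` integration for AQLM. The snippet below is a minimal sketch, assuming the `aqlm` inference package is installed (e.g. `pip install aqlm[gpu]`) together with a recent `transformers` release; the 2-bit 1x16 Llama-2-7b model is used purely as an example, and any of the Hub links from the table can be substituted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model from the table above; swap in any other Hub link as needed.
model_id = "ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
    device_map="auto",   # place the quantized weights on the available device(s)
)

# Quick sanity check: generate a short continuation.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```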