---
library_name: transformers
tags:
- llama
- facebook
- meta
- llama-2
- conversational
- text-generation-inference
---
An official quantization of [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).

For this quantization, we used 1 codebook of 16 bits for groups of 8 weights, i.e. an effective 2 bits per weight.
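
For reference, the effective bit-width follows directly from the scheme description; here is a quick back-of-the-envelope check (the variable names are illustrative, not part of the AQLM API):

```python
# Effective bit-width of the 1x16 scheme: one 16-bit codebook index
# encodes a group of 8 consecutive weights.
num_codebooks = 1   # the "1x" in "1x16"
code_bits = 16      # bits per codebook index
group_size = 8      # weights covered by each code

bits_per_weight = num_codebooks * code_bits / group_size
print(bits_per_weight)  # 2.0, matching the "2Bit" in this model's repo name
```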
| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|------------|-------------|----------------|----------------|--------------------------------------------------------------------------|
| Llama-2-7b | 1x16 | 5.68 | 2.4 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-1x16-hf) |
| Llama-2-7b | 2x8 | 5.90 | 2.2 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-7b-AQLM-PV-2Bit-2x8-hf) |
| Llama-2-7b | 1x16g16 | 9.21 | 1.7 | [Link](https://huggingface.co/justheuristic/Llama-2-7b-AQLM-PV-1Bit-1x16-hf) |
| Llama-2-13b (this) | 1x16 | 5.05 | 4.1 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf) |
| Llama-2-70b | 1x16 | 3.78 | 18.8 | [Link](https://huggingface.co/ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf) |
The 1x16g16 (1-bit) models will be fully supported as soon as we update the inference library with their respective kernels.
To learn more about inference with these models, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).

The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
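
As a quick start, here is a minimal sketch of running this checkpoint with `transformers`. It assumes a recent `transformers` release with AQLM support and the `aqlm` inference kernels installed (e.g. `pip install aqlm[gpu]`); see the repo above for the exact requirements.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-13b-AQLM-PV-2Bit-1x16-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the dtype stored in the checkpoint config
    device_map="auto",   # place layers on the available GPU(s)
)

# Plain text completion: this is a quantized base model, not a chat-tuned one.
inputs = tokenizer("The largest mammal on Earth is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```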