An official quantization of meta-llama/Llama-2-70b using PV-Tuning on top of AQLM.

For this quantization, we used one 16-bit codebook for groups of 8 weights, i.e., roughly 2 bits per weight.
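
As a rough illustration of where the 2-bit figure comes from (a sketch only; it ignores codebook storage and any non-quantized layers):

```python
# Effective bitrate of the 1x16 scheme: a single 16-bit code indexes
# each group of 8 weights, so the per-weight cost of the codes is:
codebook_bits = 16   # bits per code (1 codebook x 16 bits)
group_size = 8       # weights encoded per code
bits_per_weight = codebook_bits / group_size
print(bits_per_weight)  # 2.0 bits per weight, before codebook overhead
```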

| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|---|---|---|---|---|
| Llama-2-7b | 1x16 | 5.68 | 2.4 | Link |
| Llama-2-7b | 2x8 | 5.90 | 2.2 | Link |
| Llama-2-7b | 1x16g16 | 9.21 | 1.7 | Link |
| Llama-2-13b | 1x16 | 5.05 | 4.1 | Link |
| Llama-2-70b (this) | 1x16 | 3.78 | 18.8 | Link |

The 1x16g16 (1-bit) models are on the way and will be released as soon as we update the inference library with their respective kernels.

To learn more about inference, and for information on how to quantize models yourself, please refer to the official GitHub repo. The original code for PV-Tuning can be found in the AQLM@pv-tuning branch.
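
For example, here is a minimal inference sketch using Hugging Face transformers. It assumes a CUDA GPU and that the `aqlm` package providing the 1x16 kernels is installed (see the GitHub repo for the exact requirements):

```python
# Sketch: load the quantized checkpoint and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Llama-2-70b-AQLM-PV-2Bit-1x16-hf"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",   # keep the checkpoint's native dtypes
    device_map="auto",    # spread layers across available GPUs
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```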
