
AQLM quantization of 152334H/miqu-1-70b-sf.

For this quantization, we used 1 codebook of 16 bits.
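The "1x16" scheme means each group of weights is encoded as a single 16-bit index into one learned codebook of 2^16 vectors; dequantization is then just a table lookup. A minimal sketch of that decoding step (the sizes, names, and random codebook below are illustrative, not the actual AQLM implementation):

```python
import numpy as np

# Illustrative sizes: real AQLM codebooks are learned during quantization.
group_size = 8           # weights represented by each code
num_entries = 2 ** 16    # one codebook of 16 bits -> 65536 candidate vectors

rng = np.random.default_rng(0)
codebook = rng.standard_normal((num_entries, group_size)).astype(np.float32)

# Each weight group is stored as one uint16 index: 2 bytes per 8 weights,
# i.e. roughly 2 bits per weight before scales and other overheads.
codes = rng.integers(0, num_entries, size=1024, dtype=np.uint16)

# Dequantization: look up each group's vector, then flatten back to a row.
weights = codebook[codes].reshape(-1)
print(weights.shape)  # (8192,)
```

This is why the 1x16 scheme compresses a 70B-parameter model to under 19 GB while keeping lookup-based dequantization cheap at inference time.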

Selected evaluation results for this and other models:

| Model | AQLM scheme | WikiText-2 PPL | Model size, GB | Hub link |
|---|---|---|---|---|
| Llama-2-7b | 1x16 | 6.31 | 2.4 | Link |
| Llama-2-7b | 2x8 | 7.98 | 2.2 | Link |
| Llama-2-7b | 8x8 | 7.83 | 2.2 | Link |
| Llama-2-13b | 1x16 | 5.41 | 4.1 | Link |
| Llama-2-70b | 1x16 | 3.96 | 18.8 | Link |
| Llama-2-70b | 2x8 | 4.83 | 18.2 | Link |
| Mixtral-8x7b | 1x16 | 4.37 | 12.6 | Link |
| miqu-1-70b (THIS) | 1x16 | 4.01 | 18.8 | Link |

To learn more about inference with AQLM models, as well as how to quantize models yourself, please refer to the official GitHub repo.
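Assuming the `aqlm` package and a recent `transformers` release are installed, loading and running an AQLM-quantized model follows the standard `transformers` API. A sketch under those assumptions (the repo id is a placeholder; substitute the actual Hub link from the table above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's actual Hub repo id.
repo_id = "<hub-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # pick up the quantized AQLM weights' dtype config
    device_map="auto",    # spread layers across available GPUs
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```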