---
library_name: transformers
tags:
  - llama
  - facebook
  - meta
  - llama-3
  - conversational
  - text-generation-inference
---

Official AQLM quantization of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

For this quantization, we used 1 codebook of 16 bits.

Results:

| Model | Quantization | MMLU (5-shot) | GSM8k (8-shot) | ArcC | ArcE | Hellaswag | Winogrande | PiQA | Model size, GB |
|---|---|---|---|---|---|---|---|---|---|
| meta-llama/Meta-Llama-3-8B-Instruct | None | 0.6560 | 0.7475 | 0.5299 | 0.8165 | 0.5771 | 0.7867 | 0.7206 | 16.1 |
| | 1x16 | 0.5872 | 0.5087 | 0.4590 | 0.7710 | 0.5491 | 0.7726 | 0.6953 | 4.1 |
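A minimal sketch of loading this checkpoint with `transformers` (AQLM checkpoints additionally require the `aqlm` package, e.g. `pip install aqlm[gpu]`). The repository id below is a placeholder, not this card's actual path; replace it before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder — substitute this model card's actual repository id.
REPO_ID = "ORGANIZATION/Meta-Llama-3-8B-Instruct-AQLM-1x16"


def load_quantized(repo_id: str = REPO_ID):
    """Load the AQLM-quantized model and tokenizer from the Hub."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype="auto",   # dtype is read from the checkpoint config
        device_map="auto",    # place the ~4.1 GB model automatically
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_quantized()
    # Llama 3 Instruct expects the chat template for conversational use.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello!"}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(prompt, max_new_tokens=64)
    print(tokenizer.decode(out[0][prompt.shape[-1]:], skip_special_tokens=True))
```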

UPD 02.05.2024

This version of the model was produced with an improved fine-tuning procedure.