---
license: llama2
train: false
inference: false
pipeline_tag: text-generation
---

# Llama-2-7b-hf-4bit_g64-HQQ

This is a version of the Llama2-7B model quantized to 4-bit (group size 64) via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq/

To run the model, first install the HQQ library from https://github.com/mobiusml/hqq/tree/main/code, then load the model as follows:

```python
from hqq.models.llama import LlamaHQQ
import transformers

model_id = 'mobiuslabsgmbh/Llama-2-7b-hf-4bit_g64-HQQ'

# Load the tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Load the quantized model
model = LlamaHQQ.from_quantized(model_id)
```

You can then use the model for text generation or to reproduce the benchmark numbers.
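For instance, here is a minimal greedy-decoding sketch, continuing from the loading snippet above; the prompt, the `'cuda'` device placement, and the generation settings are illustrative assumptions, not part of the model card:

```python
# Continues from the snippet above: `model` and `tokenizer` are already loaded.
# Prompt, device placement, and generation settings are illustrative assumptions.
prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Greedy decoding; adjust max_new_tokens / sampling parameters to taste.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```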

Limitations:
- Only a single-GPU runtime is supported.
- Not compatible with Hugging Face's PEFT.