---
license: llama2
train: false
inference: false
pipeline_tag: text-generation
---

## Llama-2-70b-hf-2bit_g16_s128-HQQ 
This is a version of the Llama-2-70b-hf model quantized to 2-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq_blog/

This model outperforms an fp16 Llama-2-13B (perplexity 4.13 vs. 4.63) at a comparable model size of roughly 26 GB.

To run the model, first install the HQQ library from https://github.com/mobiusml/hqq. A minimal install sketch (assuming the package is published on PyPI as `hqq`; installing from the repository also works):
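```bash
pip install hqq
# Or install directly from source:
# pip install git+https://github.com/mobiusml/hqq.git
```

Then load the tokenizer and the quantized model: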
```python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = 'mobiuslabsgmbh/Llama-2-70b-hf-2bit_g16_s128-HQQ'

# Load the tokenizer and the pre-quantized model weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)
```
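
A quick generation sketch follows. The prompt and generation parameters are illustrative, and it assumes the returned object behaves like a standard `transformers` causal LM (i.e. supports `.generate()`):
```python
# Hypothetical prompt, for illustration only.
prompt = "Explain half-quadratic quantization in one sentence."

# Tokenize and move the inputs to the GPU hosting the quantized weights.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Greedy decoding; adjust max_new_tokens / sampling settings to taste.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```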

*Limitations*:
- Only supports single-GPU runtime.
- Not compatible with HuggingFace's PEFT.