Update README.md
README.md
CHANGED
@@ -12,14 +12,11 @@ This model outperforms an fp16 Llama-2-13B (perplexity 4.13 vs. 4.63) for a comp
 
 To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
 ``` Python
-from hqq.models.llama_hf import LlamaHQQ
-import transformers
-
 model_id = 'mobiuslabsgmbh/Llama-2-70b-hf-2bit_g16_s128-HQQ'
-
-
-
-model = LlamaHQQ.from_quantized(model_id)
+
+from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = HQQModelForCausalLM.from_quantized(model_id)
 ```
 
 *Limitations*: <br>
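For context, a minimal usage sketch continuing from the updated snippet above; the prompt text, generation settings, and device placement below are illustrative assumptions, not part of the model card:

``` Python
# Continues from the README snippet above (model and tokenizer already loaded).
# Assumptions: HQQ is installed (e.g. pip install from
# https://github.com/mobiusml/hqq), the quantized weights were loaded onto a
# CUDA device, and the prompt/generation settings are examples only.
prompt = "Explain 2-bit quantization in one sentence."

# Tokenize the prompt and move the tensors to the GPU.
inputs = tokenizer(prompt, return_tensors='pt').to('cuda')

# Greedy decoding of up to 64 new tokens.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```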