mobicham committed on
Commit: c9721b8
1 Parent(s): da2fdae

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -6,9 +6,9 @@ pipeline_tag: text-generation
 ---
 
 ## Llama-2-7b-hf-4bit_g64-HQQ
-This is a version of the LLama-2-7B-hf model quantized to 4-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq/
+This is a version of the LLama-2-7B-hf model quantized to 4-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq_blog/
 
-To run the model, install the HQQ library from https://github.com/mobiusml/hqq/tree/main/code and use it as follows:
+To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
 ``` Python
 from hqq.models.llama import LlamaHQQ
 import transformers
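
For reference, the code block in the hunk is cut off right after the imports. Below is a minimal sketch of how such a snippet typically continues, assuming the Hugging Face repo id `mobiuslabsgmbh/Llama-2-7b-hf-4bit_g64-HQQ` and a `LlamaHQQ.from_quantized` loader in that version of the HQQ library; both names are assumptions and not part of the diff.

``` Python
from hqq.models.llama import LlamaHQQ
import transformers

# Hypothetical repo id; the actual id is not visible in the truncated hunk.
model_id = 'mobiuslabsgmbh/Llama-2-7b-hf-4bit_g64-HQQ'

# Load the tokenizer with the standard transformers API.
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Load the pre-quantized weights; from_quantized is assumed to be the
# loader exposed by the HQQ Llama wrapper at the time of this commit.
model = LlamaHQQ.from_quantized(model_id)
```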