Update README.md
## Model Details
This model is an int4 model, quantized with group_size 128, of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), generated by [intel/auto-round](https://github.com/intel/auto-round).
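To make "int4 with group_size 128" concrete: each contiguous group of 128 weights shares one scale, and each weight is rounded to one of 16 four-bit levels. Below is a minimal pure-Python sketch of symmetric group-wise round-to-nearest quantization, for illustration only; auto-round's actual algorithm additionally tunes the rounding of each weight.

```python
import random

def quantize_group(weights, bits=4):
    """Symmetric round-to-nearest quantization of one weight group.

    All weights in the group share a single scale; each weight is
    stored as a small signed integer (for int4: within [-8, 7]).
    """
    qmax = 2 ** (bits - 1) - 1                      # 7 for int4
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_group(q, scale):
    return [v * scale for v in q]

# One group of 128 weights, as in group_size=128.
random.seed(0)
group = [random.uniform(-1, 1) for _ in range(128)]
q, scale = quantize_group(group)
recon = dequantize_group(q, scale)

# Round-to-nearest bounds the per-weight error by half a quantization step.
err = max(abs(a - b) for a, b in zip(group, recon))
print(f"max abs error: {err:.4f} (bound: {scale / 2:.4f})")
```

A smaller group_size gives each scale fewer weights to cover (lower error, more overhead); group_size 128 is a common middle ground.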
### INT4 Inference with AutoGPTQ's Kernel
```python
# pip install auto-gptq[triton]
# pip install triton==2.2.0
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model_dir = "Intel/Phi-3-mini-128k-instruct-int4-inc"

# Load the int4 checkpoint; device_map="auto" places layers on the available device(s).
model = AutoModelForCausalLM.from_pretrained(quantized_model_dir,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             )
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, use_fast=True)

# Generate 50 new tokens from a short prompt and print the decoded text.
prompt = "There is a girl who likes adventure,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```
30 |
|
31 |
|
32 |
|
|
|
10 |
## Model Details
|
11 |
|
12 |
This model is an int4 model with group_size 128 of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) generated by [intel/auto-round](https://github.com/intel/auto-round).
|
13 |
+
Inference of this model is compatible with AutoGPTQ's Kernel.
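As a rough back-of-the-envelope estimate (not a measured number, and ignoring zero-points and any layers left unquantized): packing 4-bit weights plus one fp16 scale per 128-weight group costs about 4 + 16/128 = 4.125 bits per parameter, versus 16 bits for an fp16 checkpoint. For Phi-3-mini's roughly 3.8B parameters:

```python
params = 3.8e9                 # approximate parameter count of Phi-3-mini
bits_fp16 = 16
bits_int4 = 4 + 16 / 128       # 4-bit weight + one fp16 scale per 128-weight group

def gb(bits_per_param):
    """Weight storage in gigabytes for the given bits per parameter."""
    return params * bits_per_param / 8 / 1e9

print(f"fp16: {gb(bits_fp16):.1f} GB")   # ~7.6 GB
print(f"int4: {gb(bits_int4):.1f} GB")   # ~2.0 GB
```

This roughly 4x reduction in weight memory is the main practical motivation for int4 quantization.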