This is the 4-bit quantized version of `inception-mbzuai/jais-13b-chat`, created using AutoTrain, but it does not work.

## Error

### GPU

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62de65017e93762b858d3057/M0OoBfV1WC1QcLumyvy0L.png)

### CPU

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62de65017e93762b858d3057/ezLq3jhasIg--M-jJAMSI.png)
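
For context, the errors in the screenshots above are presumably hit when loading this quantized checkpoint along the following lines. This is a minimal sketch only: the repo id and the `device_map` setting below are assumptions, not taken from the screenshots.

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this quantized checkpoint -- replace with the actual model id
repo_id = "your-username/jais-13b-chat-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# device_map="auto" places the model on GPU when available, otherwise on CPU
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", trust_remote_code=True)
```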

## Quantization Process

First, install the required libraries:

```py
!pip install auto-gptq
!pip install git+https://github.com/huggingface/optimum.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install --upgrade accelerate
```

Then quantize the model (4-bit GPTQ, calibrated on the `c4` dataset):

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Load the base model's tokenizer; GPTQ uses it to tokenize the calibration data
tokenizer = AutoTokenizer.from_pretrained("inception-mbzuai/jais-13b-chat")
# 4-bit GPTQ configuration calibrated on the "c4" dataset
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
# Quantize the model while loading it (trust_remote_code is needed for the Jais architecture)
model = AutoModelForCausalLM.from_pretrained("inception-mbzuai/jais-13b-chat", quantization_config=gptq_config, trust_remote_code=True)
```
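
The snippet above only quantizes the model in memory. A minimal sketch of how the result could then be saved or uploaded follows; the local path and repo id below are assumptions:

```py
# Save the quantized weights and tokenizer locally (output path is an assumption)
model.save_pretrained("jais-13b-chat-gptq-4bit")
tokenizer.save_pretrained("jais-13b-chat-gptq-4bit")

# Or push directly to the Hugging Face Hub (repo id is an assumption)
# model.push_to_hub("your-username/jais-13b-chat-gptq-4bit")
# tokenizer.push_to_hub("your-username/jais-13b-chat-gptq-4bit")
```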