feihu.hf committed
Commit 56bd421
Parent(s): 934e649
update readme
README.md CHANGED
@@ -75,7 +75,7 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-1.8B-Chat-GPTQ-Int8`, `Qwen1.5-1.8B-Chat-AWQ`, and `Qwen1.5-1.8B-Chat-GGUF`.
+For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-1.8B-Chat-GPTQ-Int4`, `Qwen1.5-1.8B-Chat-GPTQ-Int8`, `Qwen1.5-1.8B-Chat-AWQ`, and `Qwen1.5-1.8B-Chat-GGUF`.
 
 
 ## Tips
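The change adds the `GPTQ-Int4` variant to the list of quantized correspondents. The variants all follow the same naming scheme (base model name plus a quantization suffix); a minimal sketch of that scheme, where the `quantized_repo` helper is hypothetical (not part of `transformers`) and the `Qwen/` organization prefix is an assumption:

```python
# Hypothetical helper illustrating the naming scheme of the quantized
# Qwen1.5-1.8B-Chat variants listed in the README diff above.
# The "Qwen/" org prefix is an assumption, not stated in the diff.
BASE = "Qwen/Qwen1.5-1.8B-Chat"

QUANT_SUFFIXES = {
    "gptq-int4": "GPTQ-Int4",  # added by this commit
    "gptq-int8": "GPTQ-Int8",
    "awq": "AWQ",
    "gguf": "GGUF",
}

def quantized_repo(quant: str) -> str:
    """Return the Hub repo id for a given quantization key."""
    return f"{BASE}-{QUANT_SUFFIXES[quant]}"

# The resulting id is what you would pass to e.g.
# AutoModelForCausalLM.from_pretrained(...) for the GPTQ/AWQ variants.
print(quantized_repo("awq"))  # Qwen/Qwen1.5-1.8B-Chat-AWQ
```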