---
language: en
license: other
commercial: no
inference: false
---
# pygmalion-13b-4bit-128g

## Model description

**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**

Quantized from the decoded pygmalion-13b XOR-format weights:
**https://huggingface.co/PygmalionAI/pygmalion-13b**

Stored in safetensors format.

### Quantization Information

GPTQ CUDA quantized with https://github.com/0cc4m/GPTQ-for-LLaMa:
```
python llama.py --wbits 4 models/pygmalion-13b c4 --true-sequential --groupsize 128 --save_safetensors models/pygmalion-13b/4bit-128g.safetensors
```
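The command above writes the quantized weights as a single `.safetensors` file. As a minimal sketch of that container layout (based on the published safetensors spec, not on anything in this README): the file starts with an 8-byte little-endian header length, followed by a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor data. The toy tensor name `w` below is illustrative; the same header read works on a real file such as `4bit-128g.safetensors`.

```python
import json
import struct

# Build a tiny valid safetensors blob in memory: one F32 tensor of shape [2]
# (8 bytes of data), preceded by the JSON header and its 8-byte length prefix.
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 8

# Read it back: first the header length, then the JSON tensor metadata.
(n,) = struct.unpack("<Q", blob[:8])
meta = json.loads(blob[8 : 8 + n])
print(meta["w"]["shape"])  # [2]
```

In practice you would use the `safetensors` library rather than parsing by hand; the point is that the metadata is plain JSON, so tensor names and shapes can be inspected without loading any weights.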