Update README.md
README.md
CHANGED
@@ -6,7 +6,7 @@ tags:
 ---
 
 # vicuna-13b-4bit
-Converted `vicuna-13b` to GPTQ 4bit using `true-sequential`
+Converted `vicuna-13b` to GPTQ 4bit using `true-sequential` and `groupsize 128` for best possible model performance.
 
 https://github.com/qwopqwop200/GPTQ-for-LLaMa
 
@@ -19,19 +19,6 @@ If you're not familiar with the Git process
 
 This creates and switches to a `cuda-stable` branch to continue using the quantized models.
 
-Evals
------
-**vicuna-13b-4bit-128g.safetensors** []
-
-**c4-new** -
-0
-
-**ptb-new** -
-0
-
-**wikitext2** -
-0
-
 # Usage
 1. Run manually through GPTQ
 2. (More setup but better UI) - Use the [text-generation-webui](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). Make sure to follow the installation steps first [here](https://github.com/oobabooga/text-generation-webui#installation) before adding GPTQ support.
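The settings named in the updated description (`true-sequential`, `groupsize 128`, 4-bit) map onto GPTQ-for-LLaMa's quantization script. A rough sketch of the kind of invocation involved, not taken from this commit: the exact flags vary between branches of that repo, and the input path and output filename here are placeholders.

```bash
# Quantize a local HF-format vicuna-13b checkout to 4-bit GPTQ.
# --true-sequential and --groupsize 128 match the settings the README names;
# ./vicuna-13b and the output filename are illustrative placeholders.
python llama.py ./vicuna-13b c4 \
  --wbits 4 \
  --true-sequential \
  --groupsize 128 \
  --save_safetensors vicuna-13b-4bit-128g.safetensors
```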
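The `cuda-stable` branch step kept as context in the second hunk follows the usual pin-to-a-known-revision pattern. A minimal sketch, assuming a fresh clone of GPTQ-for-LLaMa; `<commit-hash>` stands in for whichever revision the README pins, which this diff does not show.

```bash
# Create and switch to a cuda-stable branch pinned at a known-good revision.
# <commit-hash> is a placeholder; the actual hash is given elsewhere in the README.
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
git checkout -b cuda-stable <commit-hash>
```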
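For the Usage section's second option, text-generation-webui's 4-bit mode is typically launched with flags matching the quantization settings, per the wiki page linked in the diff. A sketch, assuming the weights sit in a folder named `vicuna-13b-4bit-128g` under `models/`:

```bash
# Launch text-generation-webui against the 4-bit, groupsize-128 weights.
# The model folder name is an assumption; match it to wherever you placed
# the .safetensors file under models/.
python server.py --model vicuna-13b-4bit-128g --wbits 4 --groupsize 128
```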