Upload README.md
README.md CHANGED
@@ -72,7 +72,7 @@ Multiple quantisation parameters are provided, to allow you to choose the best o
 
 Each separate quant is in a different branch. See below for instructions on fetching from different branches.
 
-All GPTQ files are made with AutoGPTQ.
+All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
 
 <details>
 <summary>Explanation of GPTQ parameters</summary>
@@ -89,10 +89,10 @@ All GPTQ files are made with AutoGPTQ.
 
 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) |
+| [main](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 
 <!-- README_GPTQ.md-provided-files end -->
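The README text above notes that each quant lives in its own branch and promises instructions for fetching from a specific branch. One relevant convention is that the Hugging Face Hub serves raw repository files at `/resolve/<revision>/<filename>`, where the revision can be any branch name from the table. A minimal sketch of building such a download URL follows; the filename `model.safetensors` is an assumption for illustration, and the branch's actual file listing should be checked for real names.

```python
# Sketch: build a direct-download URL for a file in a specific quant branch.
# Assumes the Hugging Face Hub "/resolve/<revision>/<filename>" URL convention;
# the filename here is illustrative, not confirmed for this repo.

def branch_file_url(repo_id: str, branch: str, filename: str) -> str:
    """Return the direct-download URL for `filename` on `branch` of `repo_id`."""
    return f"https://huggingface.co/{repo_id}/resolve/{branch}/{filename}"

url = branch_file_url(
    "TheBloke/Llama-2-7B-GPTQ",
    "gptq-4bit-32g-actorder_True",
    "model.safetensors",  # hypothetical filename for illustration
)
print(url)
# → https://huggingface.co/TheBloke/Llama-2-7B-GPTQ/resolve/gptq-4bit-32g-actorder_True/model.safetensors
```

The same revision string can be passed wherever a tool accepts one, e.g. `git clone --branch <branch>` or the `revision` parameter of `huggingface_hub` download helpers.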