quantized_by: Thireus
## Models available in this repository

| Link | BITS (-b) | HEAD BITS (-hb) | MEASUREMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Layers | VRAM Min | VRAM Max | PPL** |
| ------ | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | --- | ------------------ | ------ | -------- | -------- | ----- |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/commit/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | 79 | 40GB | 44GB | 4.1640625 |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/commit/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | 79 | 48GB | 52GB | 4.0625 |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/commit/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | 79 | 56GB | 60GB | 4.0703125 |

\* wikitext-2-raw-v1

\*\* evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 4096)
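As a rough sanity check on the Size column, file size tracks the BITS (-b) value almost linearly: parameter count times bits-per-weight, divided by 8. The sketch below assumes ~70e9 weights and ignores embeddings and per-tensor metadata, so real files drift by a few GB (e.g. the 6.0bpw file is 49GB versus ~52GB estimated):

```python
def exl2_size_gb(n_params_billions: float, bpw: float) -> float:
    """Back-of-the-envelope EXL2 file size in decimal GB:
    weights * bits-per-weight, converted from bits to bytes."""
    return n_params_billions * bpw / 8

print(round(exl2_size_gb(70, 4.0)))  # ~35, matching the 4.0bpw row
print(round(exl2_size_gb(70, 5.0)))  # ~44, matching the 5.0bpw row
```

VRAM requirements sit above file size because the KV cache and activations for the 4096-token context must fit alongside the weights.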

## Description:

_This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._

EXL2 is a new format used by ExLlamaV2 – https://github.com/turboderp/exllamav2. EXL2 is based on the same optimization method as GPTQ. The format allows for mixing quantization