---
inference: false
license: llama2
model_creator: WizardLM
model_link: https://huggingface.co/WizardLM/WizardLM-70B-V1.0
model_name: WizardLM 70B V1.0
model_type: llama
quantized_by: Thireus
---

# WizardLM 70B V1.0 - EXL2
- Model creator: [WizardLM](https://huggingface.co/WizardLM)
- FP32 original model used for quantization: [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) (float32)
- FP16 model used for quantization: [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) (float16 conversion of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0))
- BF16 model used for quantization: [WizardLM 70B V1.0-BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) (bfloat16 conversion of [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0))

## Models available:

| Link | BITS (-b) | HEAD BITS (-hb) | MEASUREMENT LENGTH (-ml) | LENGTH (-l) | CAL DATASET (-c) | Size | V. | Max Context Length | Base Model | Layers | VRAM Min | VRAM Max | PPL** | Comments |
| ---- | --------- | --------------- | ------------------------ | ----------- | ---------------- | ---- | --- | ------------------ | ---------- | ------ | -------- | -------- | ----- | -------- |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 39GB | 44GB | 4.15234375 | Good results |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 40GB | 44GB | 4.1640625 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-4.0bpw-h6-exl2/) | 4.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 33GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 39GB | 44GB | 4.2421875 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-4.0bpw-h8-exl2/) | 4.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 35GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 39GB | 44GB | 4.24609375 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-FP32-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/c0dd3412d59c0bc776264512bf76264e954c221d) | 4096 | [FP32](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | 80 | 47GB | 52GB | 4.06640625 | Best so far. Good results |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h8-exl2/) | 5.0 | 8 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/a4f2663e310919f007c593030d56ca110f99c261) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 44GB | [0.0.1](https://github.com/turboderp/exllamav2/tree/aee7a281708d5faff2ad0ea4b3a3a4b754f458f3) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 48GB | 52GB | 4.0625 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16-5.0bpw-h6-exl2/) | 5.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 41GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/ec5164b8a8e282b91aedb2af94dfeb89887656b7) | 4096 | [BF16](https://huggingface.co/Thireus/WizardLM-70B-V1.0-BF16) | 80 | 47GB | 52GB | 4.09765625 | Model suffers from poor prompt understanding and logic is affected |
| [here](https://huggingface.co/Thireus/WizardLM-70B-V1.0-HF-6.0bpw-h6-exl2/) | 6.0 | 6 | 2048 | 2048 | [0000.parquet](https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-2-raw-v1/train)* | 49GB | [0.0.2](https://github.com/turboderp/exllamav2/tree/fae6fb296c6db4e3b1314c49c030541bed98acb9) | 4096 | [FP16](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF) | 80 | 56GB | 60GB | 4.0703125 | Model suffers from poor prompt understanding and logic is affected |

\* wikitext-2-raw-v1

\*\* Evaluated with text-generation-webui ExLlama v0.0.2 on wikitext-2-raw-v1 (stride 512 and max_length 0). For reference, [TheBloke_WizardLM-70B-V1.0-GPTQ_gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ/tree/gptq-4bit-32g-actorder_True) has a perplexity score of 4.1015625.

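The stride-based measurement works roughly like the sketch below. This is a generic transformers/torch version, not the exact text-generation-webui evaluator; the model id, dtype, and device placement are illustrative. Lower perplexity is better.

```
# Sketch of stride-based perplexity on wikitext-2-raw-v1, mirroring the
# stride-512 evaluation described above (illustrative, not the exact
# text-generation-webui code).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simsim314/WizardLM-70B-V1.0-HF"  # any full-precision base above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.float16
)

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids
max_length, stride = 2048, 512

nll_sum, n_tokens, prev_end = 0.0, 0, 0
for begin in range(0, input_ids.size(1), stride):
    end = min(begin + max_length, input_ids.size(1))
    trg_len = end - prev_end  # only score tokens not already scored
    ids = input_ids[:, begin:end].to(model.device)
    targets = ids.clone()
    targets[:, :-trg_len] = -100  # mask the overlapping context
    with torch.no_grad():
        nll_sum += model(ids, labels=targets).loss.item() * trg_len
    n_tokens += trg_len
    prev_end = end
    if end == input_ids.size(1):
        break

print(torch.exp(torch.tensor(nll_sum / n_tokens)))  # perplexity
```
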
## Description:

_This repository contains EXL2 model files for [WizardLM's WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)._

EXL2 is a new format used by ExLlamaV2 (https://github.com/turboderp/exllamav2). EXL2 is based on the same optimization method as GPTQ. The format allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight.

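For orientation, loading one of these EXL2 repositories and generating with it looks roughly like the sketch below. The local model path, sampling settings, and prompt are illustrative; the class names follow the exllamav2 0.0.x example scripts:

```
# Minimal sketch: load an EXL2 quant with ExLlamaV2 and generate.
# The local path is illustrative (download one of the repos above first).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/WizardLM-70B-V1.0-FP32-4.0bpw-h6-exl2"
config.prepare()

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # illustrative sampling settings
settings.top_p = 0.9

prompt = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions. "
          "USER: What is EXL2? ASSISTANT:")
print(generator.generate_simple(prompt, settings, 200))
```
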
## Prompt template (official):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
```

## Prompt template (suggested):

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER:
{prompt}
ASSISTANT:


```

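When scripting requests, the suggested template reduces to simple string assembly; `format_prompt` below is a hypothetical helper, not part of any library:

```
# Hypothetical helper that fills the suggested template above.
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")

def format_prompt(user_message: str) -> str:
    return f"{SYSTEM}\nUSER:\n{user_message}\nASSISTANT:\n"

print(format_prompt("Explain EXL2 quantization in one sentence."))
```
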
## Quantization process:

| Original Model | → | (optional) float16 or bfloat16 Model* | → | Safetensors Model** | → | EXL2 Model |
| -------------- | --- | ------------------------------------- | --- | ------------------- | --- | ---------- |
| [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) | → | [WizardLM 70B V1.0-HF](https://huggingface.co/simsim314/WizardLM-70B-V1.0-HF)* | → | Safetensors** | → | EXL2 |

Example to convert WizardLM-70B-V1.0-HF to EXL2 4.0 bpw with a 6-bit head (`convert.py` sits at the root of the ExLlamaV2 repository):

```
mkdir -p ~/EXL2/WizardLM-70B-V1.0-HF_4bit # Create the output directory
python convert.py -i ~/float16_safetensored/WizardLM-70B-V1.0-HF -o ~/EXL2/WizardLM-70B-V1.0-HF_4bit -c ~/EXL2/0000.parquet -b 4.0 -hb 6 # 4.0 bpw average, 6-bit head
```

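The `-c` calibration file above is the wikitext-2-raw-v1 parquet shard linked in the models table. One way to fetch it locally is sketched below; the in-repo filename is an assumption based on the linked dataset tree:

```
# Sketch: download the wikitext-2-raw-v1 calibration parquet from the Hub.
# The in-repo filename is an assumption based on the dataset tree linked above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="wikitext",
    repo_type="dataset",
    revision="refs/convert/parquet",
    filename="wikitext-2-raw-v1/train/0000.parquet",
)
print(path)  # copy or symlink this file to ~/EXL2/0000.parquet
```
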
\* Use the following script to convert your local pytorch_model bin files to float16 (or bfloat16) safetensors all in one go:

- https://github.com/oobabooga/text-generation-webui/blob/main/convert-to-safetensors.py (best for sharding and float16/FP16 or bfloat16/BF16 conversion)

Example to convert [WizardLM 70B V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) directly to float16 safetensors in 10GB shards:

```
python convert-to-safetensors.py ~/original/WizardLM-70B-V1.0 --output ~/float16_safetensored/WizardLM-70B-V1.0 --max-shard-size 10GB
```

Use `--bf16` if you'd like to try bfloat16 instead, but note that there are concerns about quantization quality (see https://github.com/turboderp/exllamav2/issues/30#issuecomment-1719009289).

|
85 |
+
\*\* Use any one of the following scripts to convert your local pytorch_model bin files to safetensors:
|
86 |
+
|
87 |
+
- https://github.com/turboderp/exllamav2/blob/master/util/convert_safetensors.py (official ExLlamaV2)
|
88 |
+
- https://huggingface.co/Panchovix/airoboros-l2-70b-gpt4-1.4.1-safetensors/blob/main/bin2safetensors/convert.py (recommended)
|
89 |
+
- https://gist.github.com/epicfilemcnulty/1f55fd96b08f8d4d6693293e37b4c55e#file-2safetensors-py
|
90 |
+
|
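As promised above, here is a minimal standalone sketch of what a bin-to-safetensors conversion does; it is an illustration, not a copy of any of the linked scripts, and assumes each shard fits in CPU RAM:

```
# Minimal sketch of a bin -> safetensors conversion (illustrative).
import glob
import os
import sys

import torch
from safetensors.torch import save_file

model_dir = sys.argv[1]
for bin_path in sorted(glob.glob(os.path.join(model_dir, "pytorch_model*.bin"))):
    state_dict = torch.load(bin_path, map_location="cpu")
    # safetensors rejects shared or non-contiguous storage, so clone each tensor
    state_dict = {k: v.clone().contiguous() for k, v in state_dict.items()}
    out_path = os.path.splitext(bin_path)[0] + ".safetensors"
    save_file(state_dict, out_path, metadata={"format": "pt"})
    print(f"{bin_path} -> {out_path}")
```

Note that for sharded checkpoints the weight index (`pytorch_model.bin.index.json`) must also be updated to point at the new file names, which this sketch does not do.
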
## Further reading:

- https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html