---
language:
- en
license: apache-2.0
base_model: microsoft/WizardLM-2-8x22B
tags:
- exl2
---
# WizardLM-2-8x22B - EXL2 5.5bpw
This is a 5.5bpw EXL2 quant of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B). Details about the model can be found on the original model page.
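As a quick sketch of downloading this quant with `huggingface-cli` (the repo name below is an assumption, following the `Dracones/<model>_exl2_<bpw>bpw` naming pattern used in the scripts further down):

```bash
# Assumed repo name; adjust if the quant lives elsewhere
huggingface-cli download --local-dir-use-symlinks=False \
  --local-dir ./WizardLM-2-8x22B_exl2_5.5bpw \
  Dracones/WizardLM-2-8x22B_exl2_5.5bpw
```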
## EXL2 Version
These quants were made with exllamav2 version 0.0.18. Quants made with this version may not load on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
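If you run exllamav2 directly rather than through the WebUI, a minimal check of the installed library version (assuming a pip-managed install) looks like this:

```bash
# Confirm the installed exllamav2 version is 0.0.18 or newer
pip show exllamav2 | grep Version

# Upgrade if it is older
pip install --upgrade exllamav2
```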
## Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 7.0 | 4.5859 |
| 6.0 | 4.6252 |
| 5.5 | 4.6493 |
| 5.0 | 4.6937 |
| 4.5 | 4.8029 |
| 4.0 | 4.9372 |
| 3.5 | 5.1336 |
| 3.25 | 5.3636 |
| 3.0 | 5.5468 |
| 2.75 | 5.8255 |
| 2.5 | 6.3362 |
| 2.25 | 7.7763 |
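For reference, the score reported here is the usual perplexity, i.e. the exponentiated average negative log-likelihood of the evaluation text:

$$ \mathrm{PPL} = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p(x_i \mid x_{<i}) \right) $$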
### Perplexity Script
This was the script used for perplexity testing.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Wikitext-2 evaluation data used for the perplexity measurements
DATA_SET=/root/wikitext/wikitext-2-v1.parquet
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
BIT_PRECISIONS=(6.0 5.5 5.0 4.5 4.0 3.5 3.25 3.0 2.75 2.5 2.25)
# Print the markdown table header
echo "| Quant Level | Perplexity Score |"
echo "|-------------|------------------|"
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  LOCAL_FOLDER="/root/models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"
  REMOTE_FOLDER="Dracones/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"

  # Download the quant from the Hub if it isn't already on disk
  if [ ! -d "$LOCAL_FOLDER" ]; then
    huggingface-cli download --local-dir-use-symlinks=False --local-dir "${LOCAL_FOLDER}" "${REMOTE_FOLDER}" >> /root/download.log 2>&1
  fi

  # Evaluate perplexity; -gs splits the model across four GPUs (~40 GB each)
  output=$(python test_inference.py -m "$LOCAL_FOLDER" -gs 40,40,40,40 -ed "$DATA_SET")
  score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
  echo "| $BIT_PRECISION | $score |"
  # rm -rf "${LOCAL_FOLDER}"
done
```
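To reproduce a single score, or just to sanity-check that a quant loads, `test_inference.py` can also be run by hand. A minimal sketch, assuming four GPUs with roughly 40 GB each (adjust `-gs` for your hardware):

```bash
# Generation smoke test against one quant
python test_inference.py -m /root/models/WizardLM-2-8x22B_exl2_5.5bpw \
  -gs 40,40,40,40 -p "Once upon a time,"
```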
## Quant Details
This was the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Set the model name and bit size
MODEL_NAME="WizardLM-2-8x22B"
# Define variables
MODEL_DIR="/mnt/storage/models/$MODEL_NAME"
OUTPUT_DIR="exl2_$MODEL_NAME"
MEASUREMENT_FILE="measurements/$MODEL_NAME.json"
# Create the measurement file if needed
if [ ! -f "$MEASUREMENT_FILE" ]; then
  echo "Creating $MEASUREMENT_FILE"
  mkdir -p "$(dirname "$MEASUREMENT_FILE")"

  # Start from a clean working directory
  if [ -d "$OUTPUT_DIR" ]; then
    rm -r "$OUTPUT_DIR"
  fi
  mkdir "$OUTPUT_DIR"

  # Measurement-only pass: -om writes the calibration measurements, -nr disables job resume
  python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -om "$MEASUREMENT_FILE"
fi
# Choose one of the below. Either create a single quant for testing or a batch of them.
# BIT_PRECISIONS=(2.25)
BIT_PRECISIONS=(5.0 4.5 4.0 3.5 3.0 2.75 2.5 2.25)
for BIT_PRECISION in "${BIT_PRECISIONS[@]}"
do
  CONVERTED_FOLDER="models/${MODEL_NAME}_exl2_${BIT_PRECISION}bpw"

  # If it doesn't already exist, make the quant
  if [ ! -d "$CONVERTED_FOLDER" ]; then
    echo "Creating $CONVERTED_FOLDER"

    # Start from a clean working directory
    if [ -d "$OUTPUT_DIR" ]; then
      rm -r "$OUTPUT_DIR"
    fi
    mkdir "$OUTPUT_DIR"
    mkdir -p "$CONVERTED_FOLDER"

    # Quantize to the target bitrate, reusing the measurement file (-m),
    # and compile the finished quant into CONVERTED_FOLDER (-cf)
    python convert.py -i "$MODEL_DIR" -o "$OUTPUT_DIR" -nr -m "$MEASUREMENT_FILE" -b "$BIT_PRECISION" -cf "$CONVERTED_FOLDER"
  fi
done
```
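Once a quant is compiled, it can be pushed to the Hub. A minimal sketch, assuming an authenticated `huggingface-cli` and a target repo following the naming pattern above:

```bash
# Hypothetical target repo; substitute a repo you own
huggingface-cli upload Dracones/WizardLM-2-8x22B_exl2_5.5bpw \
  models/WizardLM-2-8x22B_exl2_5.5bpw .
```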