Upload README.md with huggingface_hub
README.md CHANGED

````diff
@@ -37,7 +37,7 @@ Below are the perplexity scores for the EXL2 models. A lower score is better.
 
 ## EQ Bench
 
-Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML,
+Here are the EQ Bench scores for the EXL2 quants using Alpaca, ChatML, Mistral, Vicuna-v1.1 and Vicuna-v0 prompt templates. A higher score is better.
 
 | Quant Size | Instruct Template | Score |
 |------------|-------------------|-------|
@@ -111,7 +111,8 @@ do
     score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')
     echo "| $BIT_PRECISION | $score |"
   fi
-done
+done
+```
 
 
 ## Quant Details
````
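The `grep -oP` line in the script's diff is doing the real work: the PCRE `\K` operator discards everything matched before it, so only the numeric score is printed. A minimal sketch of that extraction, using an invented `$output` log for illustration (requires GNU grep for `-P`):

```shell
#!/usr/bin/env bash
# Sample log text standing in for the evaluation tool's output (invented here).
output="Loading model...
Evaluation perplexity: 6.1018"

# -o prints only the matched portion; -P enables PCRE so \K can drop the
# literal prefix from the match, leaving just the digits and dot.
score=$(echo "$output" | grep -oP 'Evaluation perplexity: \K[\d.]+')

# Emit a markdown table row, as the script in the README does.
echo "| 4.0 | $score |"
```

Running this prints `| 4.0 | 6.1018 |`; swapping the hard-coded `4.0` for the loop's `$BIT_PRECISION` gives one table row per quant size.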