Add 8.0 link
README.md CHANGED
@@ -20,6 +20,8 @@ Conversion was done using VMWareOpenInstruct.parquet as calibration dataset.
 Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
 
 Original model: https://huggingface.co/Arc53/docsgpt-7b-mistral
+
+<a href="https://huggingface.co/bartowski/docsgpt-7b-mistral-exl2/tree/8_0">8.0 bits per weight</a>
 
 ## Download instructions
 
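The new anchor points at the `8_0` branch of the exl2 repo. As a minimal sketch (not part of this commit, and assuming the standard `huggingface_hub` Python API with an arbitrary example output directory), the linked 8.0 bits-per-weight revision could be fetched like this:

```python
# Sketch: download the 8.0 bpw branch added by this commit.
# Assumes the huggingface_hub package; the local_dir path is just an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/docsgpt-7b-mistral-exl2",
    revision="8_0",                          # branch linked in the README edit
    local_dir="docsgpt-7b-mistral-exl2-8_0", # example destination folder
)
```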