Exllama v2 Quantizations of mixtral-instruct-0.1-laser
Using turboderp's ExLlamaV2 v0.0.13 for quantization.
The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/cognitivecomputations/mixtral-instruct-0.1-laser
| Branch | Bits | lm_head bits | VRAM (4k context) | VRAM (16k context) | VRAM (32k context) | Description |
| --- | --- | --- | --- | --- | --- | --- |
| 6_5 | 6.5 | 8.0 | 38.9 GB | 40.4 GB | 42.4 GB | Near-unquantized performance at vastly reduced size; recommended (if you can run it). |
| 4_25 | 4.25 | 6.0 | 25.9 GB | 27.4 GB | 29.4 GB | GPTQ-equivalent bits per weight, slightly higher quality. |
| 3_75 | 3.75 | 6.0 | 23.0 GB | 24.5 GB | 26.5 GB | Lower quality, but quite usable. Good for 4k context on 24 GB. |
| 3_5 | 3.5 | 6.0 | 21.5 GB | 23.0 GB | 25.0 GB | Lower quality; only use if you need more context on 24 GB. |
| 3_0 | 3.0 | 6.0 | 18.9 GB | 20.4 GB | 22.4 GB | Very low quality; pushes context to the maximum but likely unusable. |
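As a rough sanity check on these numbers, Mixtral's roughly 46.7B total parameters at 6.5 bits per weight come to about 46.7e9 × 6.5 / 8 ≈ 38 GB of weights, which lines up with the 4k-context figure above; the rest of the footprint is the KV cache, which grows with context length. Once a branch is downloaded (see the download instructions below), loading it with ExLlamaV2's Python API looks roughly like the following sketch. The folder name matches the 6_5 example from the download section, and the exact API may differ slightly between ExLlamaV2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the directory holding the downloaded branch.
config = ExLlamaV2Config()
config.model_dir = "mixtral-instruct-0.1-laser-exl2-6_5"
config.prepare()

# Load the model, splitting it automatically across available GPUs.
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Mixtral-Instruct expects the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain Git LFS in one sentence. [/INST]"
print(generator.generate_simple(prompt, settings, 200))
```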
Download instructions
With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/mixtral-instruct-0.1-laser-exl2
```
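Note that Hugging Face repositories store model weights with Git LFS, so make sure it is set up first (`git lfs install`), or the clone will only fetch pointer files instead of the actual weights.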
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `mixtral-instruct-0.1-laser-exl2`:

```shell
mkdir mixtral-instruct-0.1-laser-exl2
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --local-dir mixtral-instruct-0.1-laser-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:

```shell
mkdir mixtral-instruct-0.1-laser-exl2-6_5
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't always like `_` in folder names):

```shell
mkdir mixtral-instruct-0.1-laser-exl2-6.5
huggingface-cli download bartowski/mixtral-instruct-0.1-laser-exl2 --revision 6_5 --local-dir mixtral-instruct-0.1-laser-exl2-6.5 --local-dir-use-symlinks False
```
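If you prefer to script the download, the same thing can be done from Python with `huggingface_hub`; a minimal sketch, where the branch and target folder are just the 6_5 example from above:

```python
from huggingface_hub import snapshot_download

# Download one branch of the repo into a local folder.
# "revision" selects the branch, mirroring --revision above.
snapshot_download(
    repo_id="bartowski/mixtral-instruct-0.1-laser-exl2",
    revision="6_5",
    local_dir="mixtral-instruct-0.1-laser-exl2-6_5",
    local_dir_use_symlinks=False,
)
```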