---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
Quantized model => https://huggingface.co/mistralai/Mistral-Large-Instruct-2407

**Quantization Details:**

Quantization is done using turboderp's ExLlamaV2 v0.2.2.

I use the default calibration datasets and arguments. The repo also includes a `measurement.json` file, which was used during the quantization process.

For models with bits per weight (BPW) over 6.0, I default to quantizing the `lm_head` layer at 8 bits instead of the standard 6 bits.
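For reference, a conversion run along these lines would look roughly like the sketch below. It is not the exact command used here: the paths and the 6.5 BPW target are placeholders, and it assumes the standard `convert.py` script from the ExLlamaV2 repo, reusing the included `measurement.json` via `-m` and setting the `lm_head` bits with `-hb`.

```sh
# Sketch of an ExLlamaV2 (v0.2.2) quantization run; paths and the 6.5 BPW target are placeholders.
#   -i  : original full-precision model directory
#   -o  : working directory for intermediate files
#   -cf : output directory for the finished EXL2 quant
#   -m  : reuse the measurement.json included in this repo (skips the measurement pass)
#   -b  : target bits per weight
#   -hb : lm_head bits (8 instead of the default 6 for targets above 6.0 BPW)
python convert.py \
  -i /path/to/Mistral-Large-Instruct-2407 \
  -o /path/to/scratch_dir \
  -cf /path/to/output_dir \
  -m measurement.json \
  -b 6.5 \
  -hb 8
```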
---

**Who are you? What's with these weird BPWs on [insert model here]?**

I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig is built using 2 x 3090s with a Ryzen APU (the APU is used solely for desktop output, so no 3090 VRAM is wasted on it). I use TabbyAPI for inference, targeting context sizes between 32K and 64K.

Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
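As a rough sketch of how that fits together (the exact launch steps depend on your TabbyAPI install; this assumes the stock `main.py` entry point and that this repo's `config.yml` has been copied into the TabbyAPI root):

```sh
# Rough sketch of the inference setup; adjust paths to your own TabbyAPI checkout.
# Copy this repo's config.yml into the TabbyAPI directory, then launch with the
# allocator override to shave off some VRAM.
export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync
cd /path/to/tabbyAPI
python main.py
```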