---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
base_model: mistralai/Pixtral-Large-Instruct-2411
base_model_relation: quantized
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
pipeline_tag: image-text-to-text
---

# Pixtral-Large-Instruct-2411 🧡 ExLlamaV2 3.0bpw Quant

3.0bpw quant of [Pixtral-Large-Instruct](https://huggingface.co/nintwentydo/Pixtral-Large-Instruct-2411). Vision inputs work on the dev branch of [ExLlamaV2](https://github.com/turboderp/exllamav2/tree/dev).

## Tokenizer And Prompt Template

Uses a conversion of the v7m1 tokenizer with a 32k vocab size. The chat template in chat_template.json uses the v7 instruct format:

```
[SYSTEM_PROMPT] <system prompt>[/SYSTEM_PROMPT][INST] <user message>[/INST] <assistant response>[INST] <user message>[/INST]
```

## Available Sizes

| Repo | Bits | Head Bits | Size |
| ----------- | ------ | ------ | ------ |
| nintwentydo/Pixtral-Large-Instruct-2411-exl2-2.5bpw | 2.5 | 6.0 | TBC |
| [nintwentydo/Pixtral-Large-Instruct-2411-exl2-3.0bpw](https://huggingface.co/nintwentydo/Pixtral-Large-Instruct-2411-exl2-3.0bpw) | 3.0 | 6.0 | 46.42 GB |
| [nintwentydo/Pixtral-Large-Instruct-2411-exl2-4.0bpw](https://huggingface.co/nintwentydo/Pixtral-Large-Instruct-2411-exl2-4.0bpw) | 4.0 | 6.0 | 60.61 GB |
| nintwentydo/Pixtral-Large-Instruct-2411-exl2-5.0bpw | 5.0 | 6.0 | TBC |
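If you build prompts by hand instead of applying chat_template.json, the v7 instruct layout above can be sketched as a small formatter. This is an illustrative sketch only: the function name `format_v7_instruct` and the role/content message dicts are assumptions, and chat_template.json remains the authoritative template.

```python
def format_v7_instruct(messages):
    """Render role/content message dicts as a v7 instruct prompt string.

    Illustrative sketch -- chat_template.json is the source of truth.
    Each special tag is followed by a single space before the content,
    matching the layout shown above.
    """
    parts = []
    for msg in messages:
        role, content = msg["role"], msg["content"]
        if role == "system":
            parts.append(f"[SYSTEM_PROMPT] {content}[/SYSTEM_PROMPT]")
        elif role == "user":
            parts.append(f"[INST] {content}[/INST]")
        elif role == "assistant":
            # Prior assistant turns sit between [/INST] and the next [INST].
            parts.append(f" {content}")
    return "".join(parts)


msgs = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
]
print(format_v7_instruct(msgs))
# [SYSTEM_PROMPT] You are helpful.[/SYSTEM_PROMPT][INST] Hi[/INST]
```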