https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501

Q6_K_XL: Q6_K weights, with the output and embedding tensors kept in F16

Fits a 24K-token context on a 24 GiB GPU
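A rough back-of-envelope check of that claim (a sketch, assuming ~24B parameters and llama.cpp's Q6_K at ~6.5625 effective bits per weight; the F16 output and embedding tensors add somewhat more and are ignored here):

```python
# Rough VRAM estimate for the Q6_K_XL quant.
# Assumptions: ~24e9 parameters, Q6_K at ~6.5625 bits/weight
# (llama.cpp's effective rate); F16 output/embed tensors ignored.
params = 24e9
bits_per_weight = 6.5625
weights_gib = params * bits_per_weight / 8 / 2**30
# What remains on a 24 GiB card for KV cache + activations:
headroom_gib = 24 - weights_gib
print(f"weights ~{weights_gib:.1f} GiB, headroom ~{headroom_gib:.1f} GiB")
```

This leaves a few GiB of headroom, which is consistent with fitting a 24K context alongside the weights.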
