https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501
Q6_K_XL: Q6_K weights, F16 output tensor, F16 embeddings
Fits a 24K context window on a 24 GiB GPU
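
For reference, a minimal llama-cpp-python sketch that loads this quant with the 24K context window and full GPU offload described above. The GGUF file name is an assumption (substitute the actual file), and llama-cpp-python must be installed with a GPU backend (e.g. CUDA) for the offload to take effect.

```python
from llama_cpp import Llama

# Load the Q6_K_XL GGUF with a 24K context window, offloading all layers
# to a single 24 GiB GPU. The file name below is hypothetical.
llm = Llama(
    model_path="Mistral-Small-24B-Instruct-2501-Q6_K_XL.gguf",  # assumed file name
    n_ctx=24576,      # 24K context, as noted above
    n_gpu_layers=-1,  # offload every layer to the GPU
)

# Simple chat-style call to verify the model loads and generates.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```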