Qwen2.5-32B-Instruct - EXL2 7.0bpw

This is a 7.0bpw EXL2 quant of Qwen/Qwen2.5-32B-Instruct.

Details about the model can be found on the original model page.

EXL2 Version

These quants were made with exllamav2 version 0.2.4. Quants made with this version of exllamav2 may not load on older versions of the library.

If you have problems loading these models, please update Text Generation WebUI to the latest version.
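Since these quants require exllamav2 0.2.4 or newer, it can help to verify the installed version before attempting to load the model. A minimal stdlib sketch (the helper names and the comparison logic here are illustrative, not part of exllamav2 itself):

```python
# Quants made with exllamav2 0.2.4 may not load on older library versions.
# Compare dotted version strings numerically rather than lexically, so
# that e.g. "0.10.0" correctly sorts above "0.2.4".
MIN_EXL2_VERSION = "0.2.4"

def version_tuple(version: str) -> tuple:
    """Convert a dotted version string like '0.2.4' to a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_compatible(installed: str, minimum: str = MIN_EXL2_VERSION) -> bool:
    """Return True if the installed exllamav2 version meets the minimum."""
    return version_tuple(installed) >= version_tuple(minimum)
```

In practice you would pass the installed library's reported version string (for example from `importlib.metadata.version("exllamav2")`) to `is_compatible` and upgrade if it returns False.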

