Disclaimer: I don't know what I'm doing.

Original Model: https://huggingface.co/Qwen/QwQ-32B

QwQ 32B EXL2 quant sizes:

| Quantization | Size |
|---|---|
| 8.0bpw | 33.5 GB |
| 7.0bpw | 29.6 GB |
| 6.5bpw | 27.5 GB |
| 6.0bpw | 25.6 GB |
| 5.5bpw | 23.6 GB |
| 5.0bpw | 21.7 GB |
| 4.5bpw | 19.7 GB |
| 4.0bpw | 17.8 GB |
| 3.75bpw | 16.8 GB |
| 3.5bpw | WIP |
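
EXL2 is the ExLlamaV2 quantization format, so these files load with any ExLlamaV2-based backend (for example TabbyAPI or text-generation-webui) or with the exllamav2 Python library directly. Below is a minimal sketch, assuming a recent exllamav2 release (0.1.x-style API) and enough free VRAM for the chosen bitrate; the repo id is the 8.0bpw upload named on this page, so swap it for whichever size fits your hardware.

```python
# Sketch: download one of the EXL2 quants and run a quick generation.
# Assumes exllamav2 and huggingface_hub are installed; adjust the repo id
# to pick a different bitrate.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = snapshot_download("cshared/Qwen-QwQ-32B-8.0bpw-exl2")

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="How many r's are in 'strawberry'?", max_new_tokens=512))
```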

Model lineage: Qwen/Qwen2.5-32B (base) → Qwen/QwQ-32B (finetune) → cshared/Qwen-QwQ-32B-8.0bpw-exl2 (this EXL2 quantization).