roleplaiapp/QwQ-32B-Preview-Q8_0-GGUF

Repo: roleplaiapp/QwQ-32B-Preview-Q8_0-GGUF
Original Model: QwQ-32B-Preview
Organization: Qwen
Quantized File: qwq-32b-preview-q8_0.gguf
Quantization: GGUF
Quantization Method: Q8_0
Use Imatrix: False
Split Model: False
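
To fetch the quantized file programmatically, here is a minimal sketch using huggingface_hub; the repo ID and filename come from the metadata above, everything else is standard library usage:

```python
from huggingface_hub import hf_hub_download

# Downloads the single Q8_0 file (~35 GB) into the local HF cache
# and returns the resolved path.
path = hf_hub_download(
    repo_id="roleplaiapp/QwQ-32B-Preview-Q8_0-GGUF",
    filename="qwq-32b-preview-q8_0.gguf",
)
print(path)
```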

Overview

This is a GGUF Q8_0 quantized version of QwQ-32B-Preview.
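
This card doesn't prescribe a runtime, but GGUF files are commonly loaded through llama.cpp bindings. A minimal inference sketch with llama-cpp-python follows; the local path, context size, and sampling settings are assumptions, not part of this repo:

```python
from llama_cpp import Llama

# Assumed local path: point this at the downloaded qwq-32b-preview-q8_0.gguf.
llm = Llama(
    model_path="qwq-32b-preview-q8_0.gguf",
    n_ctx=4096,       # context window; raise if you have the memory
    n_gpu_layers=-1,  # offload all layers when built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```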

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model Details

Format: GGUF
Model size: 32.8B params
Architecture: qwen2
Precision: 8-bit (Q8_0)
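
For sizing memory, GGUF's Q8_0 format stores each block of 32 weights as 32 int8 values plus one fp16 scale (34 bytes per block, about 8.5 bits per weight). A rough back-of-envelope estimate of the file size, ignoring metadata overhead and any tensors kept at other precisions:

```python
# Q8_0 block: 32 int8 weights + 1 fp16 scale = 34 bytes per 32 weights.
params = 32.8e9
bytes_per_weight = 34 / 32  # ~1.06 bytes (~8.5 bits) per weight
print(f"~{params * bytes_per_weight / 1e9:.1f} GB")  # ~34.9 GB
```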


Model Tree

Base model: Qwen/Qwen2.5-32B
Quantized: this model