---
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
  - en
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
  - llama-cpp
  - QwQ-32B-Preview
  - gguf
  - Q5_K_M
  - 32b
  - QwQ
  - qwen-2
  - Qwen
  - code
  - math
  - chat
  - roleplay
  - text-generation
  - safetensors
  - nlp
library_name: transformers
pipeline_tag: text-generation
---

# roleplaiapp/QwQ-32B-Preview-Q5_K_M-GGUF

- **Repo:** roleplaiapp/QwQ-32B-Preview-Q5_K_M-GGUF
- **Original Model:** QwQ-32B-Preview
- **Organization:** Qwen
- **Quantized File:** `qwq-32b-preview-q5_k_m.gguf`
- **Quantization:** GGUF
- **Quantization Method:** Q5_K_M
- **Use Imatrix:** False
- **Split Model:** False

## Overview

This is a GGUF Q5_K_M quantized version of [QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview).
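As a quick sketch of how a single-file Q5_K_M GGUF like this one is typically used (assuming llama.cpp is installed with its `llama-cli` binary on your PATH, and the `huggingface_hub` CLI is available; the context size and prompt below are illustrative):

```shell
# Fetch the quantized file from the Hub
# (pip install -U "huggingface_hub[cli]" if needed)
huggingface-cli download roleplaiapp/QwQ-32B-Preview-Q5_K_M-GGUF \
  qwq-32b-preview-q5_k_m.gguf --local-dir .

# Run it with llama.cpp: -m selects the model file,
# -c sets the context window, -p supplies a one-shot prompt
llama-cli -m qwq-32b-preview-q5_k_m.gguf \
  -c 4096 \
  -p "Write a short proof that the square root of 2 is irrational."
```

Since the model is not split, no extra merge step is needed before loading.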

## Quantization By

I often have idle A100 GPUs while building, testing, and training the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai