
Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF

This repo contains GGUF quantizations of the Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, and Qwen/Qwen2.5-Coder-14B-Instruct models at q6_K, with q8_0 used for the output and embedding tensors.
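Below is a minimal usage sketch with llama-cpp-python, assuming the library and huggingface_hub are installed (pip install llama-cpp-python huggingface_hub). The GGUF filename passed to hf_hub_download is a placeholder, not the actual file name; check the repo's file list for the exact name of the variant you want.

```python
# Minimal sketch: download one of the q8_0/q6_K GGUF files from this repo
# and run a chat completion with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF",
    filename="Qwen2.5-14B-Instruct-q8_0-q6_K.gguf",  # hypothetical filename -- check the repo
)

# Load the model; adjust n_ctx and other options to fit your hardware.
llm = Llama(model_path=model_path, n_ctx=4096)

# The Instruct variants expect chat-style prompting.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}]
)
print(response["choices"][0]["message"]["content"])
```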

Format: GGUF
Model size: 14.8B params
Architecture: qwen2
Quantization: 6-bit (q6_K, with q8_0 output and embedding tensors)


Model tree for ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF

Base model: Qwen/Qwen2.5-14B