legraphista/Qwen2.5-32B-Instruct-IMat-GGUF
Pipeline: Text Generation
Format: GGUF
Language: English
Tags: chat, quantized, quantization, imat, imatrix, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit, conversational
License: apache-2.0
Qwen2.5-32B-Instruct-IMat-GGUF
1 contributor, 34 commits
Latest commit: a8c803d (verified, 3 months ago) by legraphista: Upload Qwen2.5-32B-Instruct.Q2_K_S.gguf with huggingface_hub
File                                  Size      Storage
Qwen2.5-32B-Instruct.BF16/            (split GGUF, 3 parts)
Qwen2.5-32B-Instruct.FP16/            (split GGUF, 3 parts)
.gitattributes                        2.96 kB
Qwen2.5-32B-Instruct.Q2_K.gguf        12.3 GB   LFS
Qwen2.5-32B-Instruct.Q2_K_S.gguf      11.5 GB   LFS
Qwen2.5-32B-Instruct.Q3_K.gguf        15.9 GB   LFS
Qwen2.5-32B-Instruct.Q3_K_L.gguf      17.2 GB   LFS
Qwen2.5-32B-Instruct.Q3_K_S.gguf      14.4 GB   LFS
Qwen2.5-32B-Instruct.Q4_K.gguf        19.9 GB   LFS
Qwen2.5-32B-Instruct.Q4_K_S.gguf      18.8 GB   LFS
Qwen2.5-32B-Instruct.Q5_K.gguf        23.3 GB   LFS
Qwen2.5-32B-Instruct.Q5_K_S.gguf      22.6 GB   LFS
Qwen2.5-32B-Instruct.Q6_K.gguf        26.9 GB   LFS
Qwen2.5-32B-Instruct.Q8_0.gguf        34.8 GB   LFS
README.md                             8.56 kB
imatrix.dat                           15 MB     LFS
imatrix.dataset                       280 kB
imatrix.log                           10.3 kB

All entries were uploaded with huggingface_hub 3 months ago; every file except imatrix.dat carries the "Safe" scan badge.