legraphista/Qwen2.5-32B-Instruct-IMat-GGUF
Tags: Text Generation · GGUF · English · chat · quantized · quantization · imat · imatrix · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · conversational
License: apache-2.0
Qwen2.5-32B-Instruct-IMat-GGUF
1 contributor · History: 45 commits
Latest commit: 7751364 (verified) by legraphista, "Upload README.md with huggingface_hub", 4 months ago
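The files listed below can be fetched directly over HTTP. A minimal sketch of the standard huggingface.co "resolve" download URL for one of the single-file quants (the repo id and the `Qwen2.5-32B-Instruct.<QUANT>.gguf` filename pattern come from this listing; the quant name is the caller's choice):

```python
# Build the direct-download URL for one of the GGUF files in this repo.
# Uses the standard huggingface.co "resolve/<revision>/<filename>" endpoint;
# the repo id and filename pattern are taken from the file listing below.
REPO_ID = "legraphista/Qwen2.5-32B-Instruct-IMat-GGUF"

def gguf_url(quant: str, revision: str = "main") -> str:
    """URL for a single-file quant such as Q4_K or IQ3_M."""
    filename = f"Qwen2.5-32B-Instruct.{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(gguf_url("Q4_K_S"))
```

In practice you would more likely use `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` or the `huggingface-cli download` command, which add caching and resumable transfers; the URL above is what those tools resolve to.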
All files were uploaded with huggingface_hub; every entry was last updated 4 months ago.

| File | Size | LFS |
|------|------|-----|
| Qwen2.5-32B-Instruct.BF16/ (split into 3 parts) | - | - |
| Qwen2.5-32B-Instruct.FP16/ (split into 3 parts) | - | - |
| .gitattributes | 3.31 kB | no |
| Qwen2.5-32B-Instruct.IQ3_M.gguf | 14.8 GB | yes |
| Qwen2.5-32B-Instruct.IQ3_S.gguf | 14.4 GB | yes |
| Qwen2.5-32B-Instruct.IQ3_XS.gguf | 13.7 GB | yes |
| Qwen2.5-32B-Instruct.IQ3_XXS.gguf | 12.8 GB | yes |
| Qwen2.5-32B-Instruct.IQ4_NL.gguf | 18.7 GB | yes |
| Qwen2.5-32B-Instruct.IQ4_XS.gguf | 17.7 GB | yes |
| Qwen2.5-32B-Instruct.Q2_K.gguf | 12.3 GB | yes |
| Qwen2.5-32B-Instruct.Q2_K_S.gguf | 11.5 GB | yes |
| Qwen2.5-32B-Instruct.Q3_K.gguf | 15.9 GB | yes |
| Qwen2.5-32B-Instruct.Q3_K_L.gguf | 17.2 GB | yes |
| Qwen2.5-32B-Instruct.Q3_K_S.gguf | 14.4 GB | yes |
| Qwen2.5-32B-Instruct.Q4_K.gguf | 19.9 GB | yes |
| Qwen2.5-32B-Instruct.Q4_K_S.gguf | 18.8 GB | yes |
| Qwen2.5-32B-Instruct.Q5_K.gguf | 23.3 GB | yes |
| Qwen2.5-32B-Instruct.Q5_K_S.gguf | 22.6 GB | yes |
| Qwen2.5-32B-Instruct.Q6_K.gguf | 26.9 GB | yes |
| Qwen2.5-32B-Instruct.Q8_0.gguf | 34.8 GB | yes |
| README.md | 9.46 kB | no |
| imatrix.dat | 15 MB | yes |
| imatrix.dataset | 280 kB | no |
| imatrix.log | 10.3 kB | no |