MaziyarPanahi/calme-2.2-qwen2-72b-GGUF
Likes: 2
Pipeline: Text Generation
Tags: GGUF, qwen, qwen-2, quantized, 2-bit, 3-bit, 4-bit precision, 5-bit, 6-bit, 8-bit precision, 16-bit, imatrix, conversational
License: tongyi-qianwen
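To fetch one of these quantizations, here is a minimal sketch using the huggingface_hub Python package (assumed installed); the filename is taken from the file listing below and can be swapped for any other quant:

```python
from huggingface_hub import hf_hub_download

# Download a single quantized GGUF file from this repo into the local
# Hugging Face cache and return its path. Swap `filename` for any other
# quant from the file listing (e.g. Q4_K_M, Q2_K, ...).
path = hf_hub_download(
    repo_id="MaziyarPanahi/calme-2.2-qwen2-72b-GGUF",
    filename="calme-2.2-qwen2-72b.IQ4_XS.gguf",
)
print(path)
```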
Files and versions
1 contributor · History: 5 commits
Latest commit 4b688e6 (verified, 4 months ago) by MaziyarPanahi: [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5)
File | Size | LFS | Last commit | Age
.gitattributes | 2.62 kB |  | [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5) | 4 months ago
README.md | 3.05 kB |  | Create README.md (#1) | 4 months ago
calme-2.2-qwen2-72b.IQ1_M.gguf | 23.7 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
calme-2.2-qwen2-72b.IQ1_S.gguf | 22.7 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
calme-2.2-qwen2-72b.IQ2_XS.gguf | 27.1 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
calme-2.2-qwen2-72b.IQ3_XS.gguf | 32.8 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
calme-2.2-qwen2-72b.IQ4_XS.gguf | 39.7 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago
calme-2.2-qwen2-72b.Q2_K.gguf | 29.8 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q3_K_L.gguf | 39.5 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q3_K_M.gguf | 37.7 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q3_K_S.gguf | 34.5 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q4_K_M.gguf | 47.4 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q4_K_S.gguf | 43.9 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit fe899af0c28a490ee2a17c2ed125673df3ef1716ff4cc195947fe3d7c12a1410) (#3) | 4 months ago
calme-2.2-qwen2-72b.Q5_K_M.gguf-00001-of-00008.gguf | 8.08 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5) | 4 months ago
calme-2.2-qwen2-72b.Q5_K_M.gguf-00002-of-00008.gguf | 7.19 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5) | 4 months ago
calme-2.2-qwen2-72b.Q5_K_M.gguf-00003-of-00008.gguf | 6.69 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5) | 4 months ago
calme-2.2-qwen2-72b.Q5_K_M.gguf-00004-of-00008.gguf | 6.68 GB | LFS | [WIP] Upload folder using huggingface_hub (multi-commit 13e5db1be43eff70e0c48e97d3fdd8ea4ad79152c5f8b634c7b503bab56f8482) (#5) | 4 months ago
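The Q5_K_M quantization is stored as eight split shards (the listing above is truncated after the fourth). Below is a sketch, assuming the huggingface_hub package, that pulls every Q5_K_M shard with a filename pattern; loaders with split-GGUF support (for example, recent llama.cpp builds) can typically open the model from the first shard once all parts sit in the same directory:

```python
from huggingface_hub import snapshot_download

# Download only the Q5_K_M shards (00001-of-00008 through 00008-of-00008)
# into one local snapshot directory so a split-aware loader can see them all.
local_dir = snapshot_download(
    repo_id="MaziyarPanahi/calme-2.2-qwen2-72b-GGUF",
    allow_patterns=["*Q5_K_M*"],
)
print(local_dir)
```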