Neko-Institute-of-Science / turbcat-instruct-72b-GGUF
Tags: GGUF, Inference Endpoints, conversational
Likes: 0
Branch: main | 1 contributor | 3 commits
Latest commit: 1f7927c (verified) by Neko-Institute-of-Science, "Upload folder using huggingface_hub", 4 months ago
All files were added in the commit "Upload folder using huggingface_hub" (4 months ago).

File                                         Size      Storage
.gitattributes                               2.1 kB
Qwen2-72B-Instruct-F16-00001-of-00003.gguf   49.9 GB   LFS
Qwen2-72B-Instruct-F16-00002-of-00003.gguf   49.6 GB   LFS
Qwen2-72B-Instruct-F16-00003-of-00003.gguf   45.9 GB   LFS
ggml-model-Q4_0.gguf                         41.2 GB   LFS
ggml-model-Q6_k-00001-of-00002.gguf          50 GB     LFS
ggml-model-Q6_k-00002-of-00002.gguf          14.4 GB   LFS
ggml-model-Q8_0-00001-of-00002.gguf          49.8 GB   LFS
ggml-model-Q8_0-00002-of-00002.gguf          27.4 GB   LFS
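The GGUF files above can be fetched with huggingface_hub, the same library the commit messages say was used to upload them. A minimal sketch, assuming huggingface_hub is installed and that only the Q6_K quantization is wanted; the repo_id and filename pattern are taken from the listing above:

from huggingface_hub import snapshot_download

# Download only the two Q6_K shards from this repository.
local_dir = snapshot_download(
    repo_id="Neko-Institute-of-Science/turbcat-instruct-72b-GGUF",
    allow_patterns=["ggml-model-Q6_k-*.gguf"],
)
print(local_dir)  # local folder containing the downloaded shards

If these multi-part files were produced with llama.cpp's gguf-split (the 0000X-of-0000Y naming suggests so, but this is an assumption), loading the first shard, e.g. ggml-model-Q6_k-00001-of-00002.gguf, will pick up the remaining shards from the same directory.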