legraphista/Mistral-Large-Instruct-2407-IMat-GGUF

Text Generation · GGUF · 10 languages
Tags: quantized, quantization, imat, imatrix, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit
License: mrl
Revision: c81754d
Path: Mistral-Large-Instruct-2407-IMat-GGUF / Mistral-Large-Instruct-2407.Q4_K
1 contributor · History: 4 commits

Latest commit: legraphista — "Upload Mistral-Large-Instruct-2407.Q4_K/Mistral-Large-Instruct-2407.Q4_K-00003-of-00004.gguf with huggingface_hub" (3be9c09, verified, 4 months ago)
Files (each Safe, stored via LFS, uploaded with huggingface_hub 4 months ago):

Mistral-Large-Instruct-2407.Q4_K-00001-of-00004.gguf  23.8 GB
Mistral-Large-Instruct-2407.Q4_K-00002-of-00004.gguf  23.8 GB
Mistral-Large-Instruct-2407.Q4_K-00003-of-00004.gguf  24 GB
Mistral-Large-Instruct-2407.Q4_K-00004-of-00004.gguf  1.59 GB
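The Q4_K quantization above is split into four shards following the llama.cpp naming convention `<base>-NNNNN-of-NNNNN.gguf`. A minimal sketch of working with this listing, assuming the repo id and folder layout shown on this page; `shard_names` merely reconstructs the file names, and `download_q4_k` is a hypothetical helper around `huggingface_hub.snapshot_download` (not run here, since the shards total roughly 73 GB):

```python
def shard_names(base: str, n_shards: int) -> list[str]:
    """Build the shard file names of an n-way split GGUF.

    Matches the listing above: shards are numbered from 00001
    and padded to five digits on both the index and the total.
    """
    return [f"{base}-{i:05d}-of-{n_shards:05d}.gguf" for i in range(1, n_shards + 1)]


def download_q4_k(local_dir: str = ".") -> str:
    """Hypothetical helper: fetch only the Q4_K shards of this repo.

    Uses huggingface_hub's snapshot_download with allow_patterns so the
    other quantizations in the repo are skipped (a sketch, assuming the
    folder layout shown on the page; requires `pip install huggingface_hub`).
    """
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="legraphista/Mistral-Large-Instruct-2407-IMat-GGUF",
        allow_patterns=["Mistral-Large-Instruct-2407.Q4_K/*"],
        local_dir=local_dir,
    )
```

Once downloaded, pointing llama.cpp at the first shard (`...-00001-of-00004.gguf`) is typically enough; it detects and loads the remaining shards from the same directory.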