Hugging Face
legraphista/Mistral-Large-Instruct-2407-IMat-GGUF
Text Generation · GGUF · 10 languages
Tags: quantized, GGUF, quantization, imat, imatrix, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit
License: mrl
Mistral-Large-Instruct-2407-IMat-GGUF / Mistral-Large-Instruct-2407.Q4_K (revision c99c24e)

1 contributor · History: 3 commits
Latest commit: legraphista, "Upload Mistral-Large-Instruct-2407.Q4_K/Mistral-Large-Instruct-2407.Q4_K-00002-of-00004.gguf with huggingface_hub" (c99c24e, verified, 5 months ago)
Files in this folder (all stored via Git LFS):

Mistral-Large-Instruct-2407.Q4_K-00001-of-00004.gguf · 23.8 GB · "Upload Mistral-Large-Instruct-2407.Q4_K/Mistral-Large-Instruct-2407.Q4_K-00001-of-00004.gguf with huggingface_hub" · 5 months ago
Mistral-Large-Instruct-2407.Q4_K-00002-of-00004.gguf · 23.8 GB · "Upload Mistral-Large-Instruct-2407.Q4_K/Mistral-Large-Instruct-2407.Q4_K-00002-of-00004.gguf with huggingface_hub" · 5 months ago
Mistral-Large-Instruct-2407.Q4_K-00004-of-00004.gguf · 1.59 GB · "Upload Mistral-Large-Instruct-2407.Q4_K/Mistral-Large-Instruct-2407.Q4_K-00004-of-00004.gguf with huggingface_hub" · 5 months ago
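Since the Q4_K quant is split into multiple multi-gigabyte GGUF shards, fetching it by hand is tedious. A minimal sketch of pulling just this folder with huggingface_hub's `snapshot_download` and an `allow_patterns` glob follows; the repo id and folder name come from the listing above, while the function name `download_q4_k` and the `local_dir` default are illustrative choices, not part of the original page.

```python
# Sketch: download only the Q4_K shard folder of this repo, assuming
# the huggingface_hub package is installed (pip install huggingface_hub).
from fnmatch import fnmatch

REPO_ID = "legraphista/Mistral-Large-Instruct-2407-IMat-GGUF"
# Glob matching every GGUF shard inside the Q4_K folder; the page lists
# shards 00001, 00002, and 00004, and the pattern selects whatever is present.
PATTERN = "Mistral-Large-Instruct-2407.Q4_K/*.gguf"

def download_q4_k(local_dir: str = ".") -> str:
    """Fetch only the files matching PATTERN into local_dir; returns the
    local snapshot path. Shards not matching the glob are skipped."""
    from huggingface_hub import snapshot_download  # imported lazily: network call
    return snapshot_download(
        repo_id=REPO_ID,
        allow_patterns=[PATTERN],
        local_dir=local_dir,
    )

# Quick sanity check that the glob matches a listed shard name
# without touching the network:
matches = fnmatch(
    "Mistral-Large-Instruct-2407.Q4_K/"
    "Mistral-Large-Instruct-2407.Q4_K-00001-of-00004.gguf",
    PATTERN,
)
```

Calling `download_q4_k("models")` would then place the matching shards under `models/`; tools such as llama.cpp can load a split GGUF by pointing at the first shard.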