Hugging Face
legraphista / Mistral-Large-Instruct-2407-IMat-GGUF
Text Generation · GGUF · 10 languages
Tags: quantized, GGUF, quantization, imat, imatrix, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit
License: mrl
Revision: c99c24e
Mistral-Large-Instruct-2407-IMat-GGUF / Mistral-Large-Instruct-2407.Q6_K
1 contributor · History: 5 commits
Latest commit: legraphista — "Upload Mistral-Large-Instruct-2407.Q6_K/Mistral-Large-Instruct-2407.Q6_K-00002-of-00005.gguf with huggingface_hub" (4dc4bb8, verified, 4 months ago)
Mistral-Large-Instruct-2407.Q6_K-00001-of-00005.gguf   Safe   23.9 GB   LFS   4 months ago
Mistral-Large-Instruct-2407.Q6_K-00002-of-00005.gguf   Safe   23.8 GB   LFS   4 months ago
Mistral-Large-Instruct-2407.Q6_K-00003-of-00005.gguf   Safe   23.8 GB   LFS   4 months ago
Mistral-Large-Instruct-2407.Q6_K-00004-of-00005.gguf   Safe   23.8 GB   LFS   4 months ago
Mistral-Large-Instruct-2407.Q6_K-00005-of-00005.gguf   Safe   5.16 GB   LFS   4 months ago

(Each shard was uploaded with a commit of the form "Upload Mistral-Large-Instruct-2407.Q6_K/<shard name> with huggingface_hub".)
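The listing above is a Q6_K quantization split into five GGUF shards following the llama.cpp `NNNNN-of-NNNNN` naming scheme. A minimal sketch of fetching all five with the huggingface_hub Python library — `shard_names` and `download_q6k` are hypothetical helper names introduced here for illustration, not part of this repo:

```python
REPO_ID = "legraphista/Mistral-Large-Instruct-2407-IMat-GGUF"
QUANT = "Mistral-Large-Instruct-2407.Q6_K"


def shard_names(quant: str = QUANT, total: int = 5) -> list[str]:
    # Split GGUF shards are stored under a folder named after the quant,
    # with 1-based, zero-padded part numbers: <quant>/<quant>-00001-of-00005.gguf
    return [f"{quant}/{quant}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]


def download_q6k(local_dir: str = ".") -> list[str]:
    # Requires `pip install huggingface_hub`; imported lazily so the
    # filename helper above works without the dependency installed.
    from huggingface_hub import hf_hub_download

    return [
        hf_hub_download(REPO_ID, filename, local_dir=local_dir)
        for filename in shard_names()
    ]
```

When all shards sit in the same directory, llama.cpp builds that understand split GGUF can typically be pointed at the first shard (`...-00001-of-00005.gguf`) and will locate the remaining parts automatically; expect roughly 100 GB of disk for the full set.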