legraphista/Mistral-Large-Instruct-2407-IMat-GGUF
Text Generation · GGUF · 10 languages · License: mrl
Tags: quantized, quantization, imat, imatrix, static, 16bit, 8bit, 6bit, 5bit, 4bit, 3bit, 2bit, 1bit
Mistral-Large-Instruct-2407-IMat-GGUF / Mistral-Large-Instruct-2407.Q8_0 (revision c99c24e)
1 contributor · History: 6 commits
Latest commit by legraphista: "Upload Mistral-Large-Instruct-2407.Q8_0/Mistral-Large-Instruct-2407.Q8_0-00002-of-00006.gguf with huggingface_hub" (f5dcfa5, verified, 5 months ago)
Mistral-Large-Instruct-2407.Q8_0-00001-of-00006.gguf   Safe   24 GB     LFS   5 months ago
Mistral-Large-Instruct-2407.Q8_0-00002-of-00006.gguf   Safe   23.9 GB   LFS   5 months ago
Mistral-Large-Instruct-2407.Q8_0-00003-of-00006.gguf   Safe   23.9 GB   LFS   5 months ago
Mistral-Large-Instruct-2407.Q8_0-00004-of-00006.gguf   Safe   23.9 GB   LFS   5 months ago
Mistral-Large-Instruct-2407.Q8_0-00005-of-00006.gguf   Safe   23.9 GB   LFS   5 months ago
Mistral-Large-Instruct-2407.Q8_0-00006-of-00006.gguf   Safe   10.7 GB   LFS   5 months ago

Each shard's commit message reads "Upload Mistral-Large-Instruct-2407.Q8_0/<filename> with huggingface_hub".
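The six files above follow llama.cpp's split-GGUF naming scheme, `<prefix>-%05d-of-%05d.gguf`. As a minimal sketch, the expected shard names can be regenerated from the prefix and shard count; `shard_names` is a hypothetical helper written for illustration, not part of huggingface_hub or llama.cpp.

```python
def shard_names(prefix: str, total: int) -> list[str]:
    """Return the filenames of a GGUF model split into `total` shards,
    using llama.cpp's zero-padded -NNNNN-of-NNNNN naming convention."""
    return [f"{prefix}-{i:05d}-of-{total:05d}.gguf" for i in range(1, total + 1)]

# Regenerate the Q8_0 listing shown above (6 shards, ~130 GB total).
for name in shard_names("Mistral-Large-Instruct-2407.Q8_0", 6):
    print(name)
```

To fetch only these shards rather than the whole repository, huggingface_hub's `snapshot_download` accepts an `allow_patterns` filter, e.g. `snapshot_download(repo_id="legraphista/Mistral-Large-Instruct-2407-IMat-GGUF", allow_patterns=["Mistral-Large-Instruct-2407.Q8_0/*.gguf"])`; recent llama.cpp builds can then load the model by pointing at the first shard.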