ConfidentialMind/Mistral-Small-24B-Instruct-2501_GPTQ_G128_W4A16_MSE
Tags: Text Classification · Safetensors · neuralmagic/LLM_compression_calibration · English · mistral · gptq · quantization · 4bit · confidentialmind · text-generation · mistral-small-24b · 4-bit precision
License: apache-2.0
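The repository name appears to encode the quantization recipe: GPTQ with group size 128, 4-bit weights with 16-bit activations (W4A16), and an MSE-based calibration objective, with the neuralmagic/LLM_compression_calibration dataset listed as the calibration set. As a minimal sketch, such a checkpoint is typically served with an engine that understands GPTQ weights, e.g. vLLM; the snippet below assumes vLLM can auto-detect this repository's quantization format, and the prompt and sampling parameters are illustrative rather than taken from the model card.

```python
# Hypothetical usage sketch, not taken from the model card.
# Assumes vLLM is installed with GPU support and can read this
# repository's GPTQ W4A16 weight layout.
from vllm import LLM, SamplingParams

model_id = "ConfidentialMind/Mistral-Small-24B-Instruct-2501_GPTQ_G128_W4A16_MSE"

# vLLM detects the quantization scheme from the checkpoint config,
# so no explicit quantization argument is passed here.
llm = LLM(model=model_id)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Explain GPTQ 4-bit weight quantization in one paragraph."],
    params,
)
print(outputs[0].outputs[0].text)
```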
Community (1 discussion)
#1: Any plan for 8bit version? (opened 6 days ago by jm4n21)