MaziyarPanahi / Mistral-7B-Instruct-v0.3-GGUF
License: apache-2.0
Add memory usage for each quantization method
#9 by ar08 · opened Aug 4
Discussion

ar08 · Aug 4

You can see any of @TheBloke's repos for an example.
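One way to fill in the requested numbers is to estimate memory per quantization level from bits per weight. The sketch below is an approximation only: the parameter count and the effective bits-per-weight values for common llama.cpp quant types are assumptions (real GGUF files carry per-tensor overhead and mixed quant types), so actual file sizes and RAM usage will differ somewhat.

```python
# Rough memory estimate for GGUF quantizations of a ~7B-parameter model.
# The bits-per-weight figures are approximate, commonly cited effective
# values for llama.cpp quant types, not exact format constants.

PARAMS = 7.25e9  # assumed parameter count for Mistral-7B (~7.25B)

# Approximate effective bits per weight (assumed values)
BITS_PER_WEIGHT = {
    "Q2_K":   2.6,
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K":   6.6,
    "Q8_0":   8.5,
    "FP16":   16.0,
}

def estimate_gib(params: float, bpw: float) -> float:
    """Approximate model size in GiB: params * bits / 8, converted to GiB."""
    return params * bpw / 8 / (1024 ** 3)

if __name__ == "__main__":
    for quant, bpw in BITS_PER_WEIGHT.items():
        print(f"{quant:8s} ~{estimate_gib(PARAMS, bpw):5.1f} GiB")
```

This gives a ballpark table (e.g. FP16 lands around 13.5 GiB for the assumed parameter count); for a model card, the exact file sizes shown on the repo's Files tab are the authoritative numbers.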