bluuwhale/infinity-franken-GGUF-IQ-Imatrix
Tags: GGUF · Inference Endpoints
No model card has been provided for this repository.
Downloads last month: 30
Model format: GGUF
Model size: 10.7B params
Architecture: llama

Available quantizations:
3-bit: IQ3_S
4-bit: Q4_K_M
5-bit: Q5_K_S, Q5_K_M
6-bit: Q6_K
8-bit: Q8_0
16-bit: F16
One additional file is available in the repository.
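The exact GGUF filenames are not shown on this page, so they have to be discovered from the repository itself. A minimal sketch, assuming the huggingface_hub client is installed, that lists the repository's files and keeps only the .gguf quantizations:

```python
# Minimal sketch: enumerate the GGUF files in this repository so the exact
# quantization filenames (IQ3_S, Q4_K_M, ...) can be picked for download.
from huggingface_hub import list_repo_files

REPO_ID = "bluuwhale/infinity-franken-GGUF-IQ-Imatrix"

# List every file tracked in the model repository.
all_files = list_repo_files(REPO_ID)

# Keep only the GGUF quantization files.
gguf_files = [f for f in all_files if f.endswith(".gguf")]
for name in gguf_files:
    print(name)
```

Each listed file is a standalone quantization; only the one matching your memory budget needs to be downloaded.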
Inference API: unable to determine this model's library; check the docs.
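Since the hosted inference widget cannot run this repository, the GGUF files are meant to be run locally, for example with llama-cpp-python. A minimal sketch, assuming a hypothetical filename infinity-franken-Q4_K_M-imat.gguf (substitute a real name found with the listing above):

```python
# Minimal sketch: download one quantization and run it locally with llama-cpp-python.
# FILENAME is a hypothetical placeholder; use an actual .gguf name from the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "bluuwhale/infinity-franken-GGUF-IQ-Imatrix"
FILENAME = "infinity-franken-Q4_K_M-imat.gguf"  # assumption, not a confirmed filename

# Download (and cache) the single GGUF file.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

# Load the 10.7B llama-architecture model; tune n_ctx and n_gpu_layers to the hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple text completion.
result = llm("Q: What does Q4_K_M mean in GGUF?\nA:", max_tokens=128)
print(result["choices"][0]["text"])
```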
Collection including bluuwhale/infinity-franken-GGUF-IQ-Imatrix:
GGUF Quantize Model 🖥️ (GGUF Model Quantize Weight) · 5 items · Updated Aug 5 · 1 upvote