QuantFactory/SOVL_Llama3_8B-GGUF

This is a quantized version of ResplendentAI/SOVL_Llama3_8B created using llama.cpp.

Model Description


I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL.

What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.

Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
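
For reference, below is a minimal sketch of loading one of these GGUF quantizations locally with llama-cpp-python. The filename, quant variant, and parameters are assumptions for illustration; check the repository's file list for the exact file you downloaded.

```python
# Minimal sketch: running a GGUF quant of this model with llama-cpp-python.
# The model_path is an assumed filename -- replace it with the actual GGUF file
# (e.g. a 4-bit variant) downloaded from the repository.
from llama_cpp import Llama

llm = Llama(
    model_path="SOVL_Llama3_8B.Q4_K_M.gguf",  # assumed filename for a 4-bit quant
    n_ctx=4096,                               # context window; adjust as needed
)

output = llm(
    "Write a short greeting.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```

Lower-bit quantizations (2-bit, 3-bit) trade output quality for a smaller memory footprint, while 6-bit and 8-bit files stay closer to the original weights at the cost of size.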

