Model Information

This model uses Llama 3.2 1B as its starting point and is fine-tuned on the project1-v1 dataset.

Our latest model uses a combination of supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to achieve better results than our initial experiments!

Please let us know what you think by opening a discussion in the Community tab!
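As a quick way to try the model, here is a minimal sketch of running one of the GGUF quantizations locally with llama-cpp-python. The repo id is taken from this page, but the quantization filename is an assumption; check the repo's file list for the actual names.

```python
# Minimal sketch: run a GGUF quantization of this model with llama-cpp-python.
# Assumptions: repo id "mav23/llama-v1-GGUF" (from this page) and a 4-bit file
# named "llama-v1.Q4_K_M.gguf" -- the real filename may differ.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mav23/llama-v1-GGUF",
    filename="llama-v1.Q4_K_M.gguf",  # hypothetical filename
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm(
    "Summarize what SFT and DPO are in one sentence each.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```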

Model Details

Format: GGUF
Model size: 1.24B params
Architecture: llama
Quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
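Lower-bit quantizations trade some quality for a smaller file. To see which quantization files are actually present in the repo before downloading one, a short sketch like the following (assuming access to the Hugging Face Hub) lists them:

```python
# Minimal sketch: list the GGUF files in this repo to see which quantization
# levels (2-bit through 8-bit) are available and pick one by name.
# Only the repo id from this page is assumed.
from huggingface_hub import HfApi

files = HfApi().list_repo_files(repo_id="mav23/llama-v1-GGUF")
for name in sorted(f for f in files if f.endswith(".gguf")):
    print(name)
```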

