---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Base-SFT-RDPO
---

# QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Base-SFT-RDPO](https://huggingface.co/princeton-nlp/Mistral-7B-Base-SFT-RDPO), created using llama.cpp.

# Model Description
This is a model released with the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
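
As a minimal sketch, a GGUF file from this repository can be loaded with `llama-cpp-python` (which wraps llama.cpp); the quantization filename pattern below is an assumption, so substitute an actual `.gguf` file listed in this repository.

```python
# Minimal sketch: run a GGUF quantization of Mistral-7B-Base-SFT-RDPO with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Mistral-7B-Base-SFT-RDPO-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization level; pick a file that actually exists in the repo
    n_ctx=4096,               # context window size
)

output = llm(
    "Explain preference optimization in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```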