---
library_name: transformers
pipeline_tag: text-generation
base_model: princeton-nlp/Mistral-7B-Instruct-DPO
---
# QuantFactory/Mistral-7B-Instruct-DPO-GGUF
This is a quantized version of [princeton-nlp/Mistral-7B-Instruct-DPO](https://huggingface.co/princeton-nlp/Mistral-7B-Instruct-DPO), created using llama.cpp.
# Model Description
This model was released as part of the preprint *[SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734)*. Please refer to our [repository](https://github.com/princeton-nlp/SimPO) for more details.
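
# Usage

The GGUF files in this repo can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the quant filename is an assumption, so substitute whichever GGUF file you actually download from this repo.

```python
# Minimal sketch: run a GGUF quant of Mistral-7B-Instruct-DPO with llama-cpp-python.
# The model filename below is an assumption; replace it with the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Instruct-DPO.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set to 0 for CPU-only
)

# llama-cpp-python will normally apply the chat template stored in the GGUF metadata
# (the Mistral-Instruct [INST] ... [/INST] format) when using create_chat_completion.
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain GGUF quantization in one sentence."}
    ],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```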