---
base_model: snorkelai/Snorkel-Mistral-PairRM-DPO
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
- HuggingFaceH4/ultrafeedback_binarized
license: apache-2.0
language:
- en
model_creator: snorkelai
model_name: Snorkel-Mistral-PairRM-DPO
model_type: mistral
inference: false
pipeline_tag: text-generation
prompt_template: |
  <|im_start|>system
  {{system_message}}<|im_end|>
  <|im_start|>user
  {{prompt}}<|im_end|>
  <|im_start|>assistant
quantized_by: brittlewis12
---
# Snorkel-Mistral-PairRM-DPO GGUF
Original model: [Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)
Model creator: [Snorkel AI](https://huggingface.co/snorkelai)
This repo contains GGUF format model files for Snorkel AI’s Snorkel-Mistral-PairRM-DPO.
> With this demonstration, we focus on the general approach to alignment. Thus, we use a general-purpose reward model - the performant PairRM model. We use the Mistral-7B-Instruct-v0.2 model as our base LLM.
### What is GGUF?
GGUF is a file format for representing AI models, introduced by the llama.cpp team on August 21st, 2023. It replaces GGML, which is no longer supported by llama.cpp.
Converted using llama.cpp b1960 ([26d6076](https://github.com/ggerganov/llama.cpp/commits/26d607608d794efa56df3bdb6043a2f94c1d632c))
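As a quick sanity check on a downloaded file, a GGUF file begins with the four-byte magic `GGUF` followed by a little-endian `uint32` format version. A minimal sketch of a header check (not a full parser, and the function name is illustrative):

```python
import struct

def read_gguf_version(path: str) -> int:
    """Return the GGUF format version, raising ValueError
    if the file does not start with the GGUF magic bytes."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        # version is a little-endian unsigned 32-bit integer
        (version,) = struct.unpack("<I", f.read(4))
        return version
```

This only inspects the first eight bytes, so it is cheap to run on even the largest quantized files.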
### Prompt template: ChatML
```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```
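The template above can be filled in with a small helper; a minimal sketch (the function name is illustrative, and the trailing newline after the assistant tag follows common ChatML practice):

```python
def format_chatml(system_message: str, prompt: str) -> str:
    """Render a system message and user prompt in ChatML,
    leaving the assistant turn open for the model to complete."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = format_chatml("You are a helpful assistant.", "What is GGUF?")
print(text)
```

The resulting string is what you would pass as the raw prompt when running the model without a chat-template-aware frontend.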
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!
![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)
[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date
---
## Original Model Evaluations:
> On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
> - The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
>
> After applying the above methodology:
> - This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.