---
license: mit
---
|
|
|
# News
|
|
|
- [2023/09/26]: UltraRM unleashes the power of [UltraLM-13B-v2.0](https://huggingface.co/openbmb/UltraLM-13b-v2.0) and [UltraLM-13B](https://huggingface.co/openbmb/UltraLM-13b)! A simple best-of-16 sampling achieves **92.30%** (UltraLM2, 🥇 in 13B results) and **91.54%** (UltraLM, 🥇 in LLaMA-1 results) win rates against text-davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark!
|
- [2023/09/26]: We release the [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, along with the UltraFeedback-powered reward model [UltraRM](https://huggingface.co/openbmb/UltraRM-13b) and critique model [UltraCM](https://huggingface.co/openbmb/UltraCM-13b)! Both set **new SOTAs** among open-source models!
|
|
|
# Links
|
|
|
- 🤗 [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)
|
- 🤗 [UltraRM](https://huggingface.co/openbmb/UltraRM-13b)
|
- 🤗 [UltraCM](https://huggingface.co/openbmb/UltraCM-13b)
|
|
|
# UltraRM
|
|
|
We train and release a reward model, UltraRM, based on UltraFeedback to further facilitate alignment research. UltraRM is initialized from LLaMA2-13B.
|
|
|
Specifically, we train two versions of the reward model: UltraRM-UF is fine-tuned only on UltraFeedback, while UltraRM is fine-tuned on a mixture of UltraFeedback and an equally sized sample drawn from three open-source datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [Stanford SHP](https://huggingface.co/datasets/stanfordnlp/SHP), and [OpenAI Summarization](https://huggingface.co/datasets/openai/summarize_from_feedback).
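To make the data mixture concrete, below is a minimal sketch (not the authors' released training script) of how such a mixture could be assembled with the `datasets` library; the split names, sampling seed, and equal per-source sample sizes are illustrative assumptions.

```python
# Sketch of the training mixture described above (split names, seed, and
# per-source sizes are assumptions; not the official preprocessing script).
from datasets import load_dataset

ultrafeedback = load_dataset("openbmb/UltraFeedback", split="train")

extra_sources = {
    "hh-rlhf": load_dataset("Anthropic/hh-rlhf", split="train"),
    "shp": load_dataset("stanfordnlp/SHP", split="train"),
    "summarization": load_dataset("openai/summarize_from_feedback", "comparisons", split="train"),
}

# Draw a combined sample from the three extra datasets roughly equal in size
# to UltraFeedback itself.
per_source = len(ultrafeedback) // len(extra_sources)
sampled = {
    name: ds.shuffle(seed=0).select(range(min(per_source, len(ds))))
    for name, ds in extra_sources.items()
}

# Each source would then be converted to a common (prompt, chosen, rejected)
# schema before being mixed with UltraFeedback for reward-model fine-tuning.
```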
|
|
|
## Reward Modeling
|
|
|
On four public preference test sets, UltraRM achieves state-of-the-art results among open-source reward models.
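For clarity, pairwise preference accuracy on such test sets is typically the fraction of (prompt, chosen, rejected) triples where the reward model scores the chosen response higher. The `score` callable below is a hypothetical stand-in for any reward scorer (e.g., the usage sketch in the next section).

```python
from typing import Callable, List, Tuple

def preference_accuracy(
    pairs: List[Tuple[str, str, str]],        # (prompt, chosen, rejected) triples
    score: Callable[[str, str], float],       # reward for a (prompt, response) pair
) -> float:
    """Fraction of pairs where the chosen response receives the higher reward."""
    correct = sum(
        score(prompt, chosen) > score(prompt, rejected)
        for prompt, chosen, rejected in pairs
    )
    return correct / len(pairs)
```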
|
|
|
## Usage
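Below is a minimal usage sketch, assuming the checkpoint follows the common reward-model layout of a LLaMA backbone plus a scalar regression head scoring the last non-padded token. The class structure, attribute names (e.g., `regression_head`), and the `Human:`/`Assistant:` prompt format are assumptions; consult the released checkpoint for the exact weight names and dialogue template.

```python
import torch
import torch.nn as nn
from transformers import LlamaConfig, LlamaModel, LlamaTokenizer, PreTrainedModel


class LlamaRewardModel(PreTrainedModel):
    """LLaMA backbone with a scalar head that scores the last non-padded token."""
    config_class = LlamaConfig

    def __init__(self, config):
        super().__init__(config)
        self.model = LlamaModel(config)
        self.regression_head = nn.Linear(config.hidden_size, 1, bias=False)

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = outputs[0]                                  # (batch, seq_len, hidden)
        rewards = self.regression_head(hidden_states).squeeze(-1)   # (batch, seq_len)
        # Index of the last non-padded token in each sequence (right padding assumed).
        ends = attention_mask.cumsum(dim=1).argmax(dim=1).view(-1, 1)
        return torch.gather(rewards, 1, ends)                       # (batch, 1)


tokenizer = LlamaTokenizer.from_pretrained("openbmb/UltraRM-13b")
model = LlamaRewardModel.from_pretrained("openbmb/UltraRM-13b")

dialogue = "Human: What is the capital of France?\n\nAssistant: The capital of France is Paris."
inputs = tokenizer(dialogue, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).item()   # higher reward = more preferred response
print(reward)
```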
|
|