weqweasdas committed · Commit 465c80f · Parent(s): 0a31e34
Update README.md

README.md CHANGED
@@ -11,3 +11,27 @@ This is the SFT checkpoint used for the project [RLHFlow/Online-RLHF](https://gi

The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a mixture of diverse, high-quality open-source data for 1 epoch, with detailed parameters given in the report. It has not been trained with RLHF and can serve as a good starting point for RLHF research.

+
+## Academic Benchmarks
+
+| **Model** | **Size** | **Method** | **LC AlpacaEval** | **MT-Bench** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
+|-----------|----------|------------|-------------------|--------------|------------|----------|---------------|----------------|---------|----------|
+| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
+| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
+| Ours (Iterative RLHF) | 8B | Iterative DPO | 37.2 | 8.46 | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
+
+## Citation
+
+Please cite our technical report if you find our model useful for your research or product.
+
+```
+@misc{dong2024rlhf,
+      title={RLHF Workflow: From Reward Modeling to Online RLHF},
+      author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
+      year={2024},
+      eprint={2405.07863},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG}
+}
+```
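Since the README describes the checkpoint as an SFT starting point for RLHF, a brief loading example may help. The sketch below uses the standard `transformers` API; the repo id `RLHFlow/LLaMA3-SFT` is an assumption not stated in the diff (substitute this repository's actual model id), and it assumes the tokenizer ships a chat template.

```python
# Minimal sketch: load the SFT checkpoint and generate one reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/LLaMA3-SFT"  # assumed repo id; use this repository's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # an 8B model fits on a single 80GB GPU in bf16
    device_map="auto",
)

# Format the prompt with the chat template (assuming one is provided).
messages = [{"role": "user", "content": "What is reinforcement learning from human feedback?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```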