---
library_name: transformers
tags: []
---

This is the SFT checkpoint used for the project [RLHFlow/Online-RLHF](https://github.com/RLHFlow/Online-RLHF).

* **Technical Report**: [RLHF Workflow: From Reward Modeling to Online RLHF](https://arxiv.org/pdf/2405.07863)
* **Authors**: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
* **Code**: https://github.com/RLHFlow/Online-RLHF

The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for one epoch on a mixture of diverse, high-quality open-source data; detailed training parameters are given in the report. It has not been trained with RLHF, so it can serve as a good starting point for RLHF research.
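As a quick sanity check, the checkpoint can be loaded and queried with the standard `transformers` API. Below is a minimal sketch; the repo id `RLHFlow/LLaMA3-SFT` is an assumption about where this checkpoint is hosted (substitute the actual model id), and it assumes the tokenizer ships with a chat template, as is typical for SFT checkpoints built on Llama-3.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face repo id for this checkpoint; replace with the
# actual model id if it differs.
model_id = "RLHFlow/LLaMA3-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within a single modern GPU
    device_map="auto",
)

# Illustrative prompt; assumes a chat template is defined in the tokenizer config.
messages = [
    {"role": "user", "content": "Briefly explain what an SFT checkpoint is."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```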