---
library_name: transformers
tags: []
---

This is the SFT checkpoint used in the RLHFlow/Online-RLHF project.

The model was trained from meta-llama/Meta-Llama-3-8B for one epoch on a mixture of diverse, high-quality open-source data; detailed hyperparameters are given in the report. It has not been trained with RLHF, so it can serve as a good starting point for RLHF research.
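Since this checkpoint is a standard Llama-3-based causal LM, it can be used with the usual `transformers` generation workflow. The sketch below is a hedged example: the manual prompt builder mirrors the Meta-Llama-3 chat-template conventions (special tokens are assumptions based on that tokenizer family; in practice prefer `tokenizer.apply_chat_template`), and the repo id in the commented loading code is an assumed placeholder.

```python
# Minimal usage sketch. The special tokens below follow the Meta-Llama-3
# chat conventions and are assumptions; prefer tokenizer.apply_chat_template
# when the tokenizer is available.

def build_llama3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a
    Llama-3-style chat prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Cue the model to produce the assistant turn next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([{"role": "user", "content": "What is RLHF?"}])

# To actually generate (downloads the ~8B checkpoint; repo id is an assumption):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-SFT")
# model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-SFT", device_map="auto")
# inputs = tok(prompt, return_tensors="pt").to(model.device)
# out = model.generate(**inputs, max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```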

## Academic Benchmarks

| Model | Size | Method | LC AlpacaEval | MT-Bench | GSM-8K | MMLU | HumanEval | TruthfulQA | ARC | MBPP |
|---|---|---|---|---|---|---|---|---|---|---|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 37.2 | 8.46 | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Citation

Please cite our technical report if you find our model useful for your research or product.

```bibtex
@misc{dong2024rlhf,
      title={RLHF Workflow: From Reward Modeling to Online RLHF}, 
      author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
      year={2024},
      eprint={2405.07863},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```