---
library_name: transformers
tags: []
---

This is the SFT checkpoint used in the RLHFlow/Online-RLHF project.

The model was trained from meta-llama/Meta-Llama-3-8B on a mixture of diverse, high-quality open-source data for one epoch; detailed training parameters are given in the report. It has not been trained with RLHF, so it can serve as a good starting point for RLHF research.
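A minimal usage sketch with the `transformers` library, assuming the checkpoint is published on the Hugging Face Hub under a repo id like `RLHFlow/LLaMA3-SFT` (verify the exact repo id before use):

```python
# Hypothetical usage sketch: the repo id below is an assumption,
# not confirmed by this card; substitute the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RLHFlow/LLaMA3-SFT"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The SFT model is chat-tuned, so format the prompt with the chat template.
messages = [{"role": "user", "content": "What is RLHF?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the checkpoint has not gone through RLHF, its raw generations are a natural baseline against which to compare policies trained with an online RLHF pipeline.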