The Llama3-8b-based Reward Model was trained using OpenRLHF on a mixture of preference datasets available at https://huggingface.co/datasets/OpenLLMAI/preference_dataset_mixture2_and_safe_pku.

Base model: https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture
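
The preference mixture can be pulled straight from the Hub with the `datasets` library. A minimal sketch; the split name and exact column layout are assumptions here, so check the dataset card before relying on them:

```python
# Minimal sketch: load the preference mixture used to train this reward model.
# The "train" split name is an assumption; confirm it against the dataset card.
from datasets import load_dataset

ds = load_dataset("OpenLLMAI/preference_dataset_mixture2_and_safe_pku", split="train")
print(ds)      # features (e.g. chosen/rejected pairs) and row count
print(ds[0])   # one preference example
```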
Training hyperparameters:

```
Scheduler: Cosine
Learning Rate: 9e-6
Warmup Ratio: 0.03
Batch Size: 256
Epochs: 1
```
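
As a reference for how these settings fit together, here is a minimal sketch of the equivalent optimizer and cosine-with-warmup schedule in plain PyTorch/transformers. The dataset size is a placeholder and the training loop is omitted; OpenRLHF's reward-model trainer wires this up internally, so this is illustrative rather than the exact training code:

```python
# Illustrative sketch of the optimizer/scheduler implied by the settings above.
# NUM_EXAMPLES is a placeholder; the real preference-pair count differs.
import torch
from transformers import AutoModelForSequenceClassification, get_cosine_schedule_with_warmup

model = AutoModelForSequenceClassification.from_pretrained(
    "OpenRLHF/Llama-3-8b-sft-mixture",
    num_labels=1,  # scalar reward head on top of the SFT base model
)

NUM_EXAMPLES = 100_000          # placeholder dataset size
BATCH_SIZE = 256                # global batch size from the table above
EPOCHS = 1
total_steps = (NUM_EXAMPLES // BATCH_SIZE) * EPOCHS

optimizer = torch.optim.AdamW(model.parameters(), lr=9e-6)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.03 * total_steps),  # warmup ratio 0.03
    num_training_steps=total_steps,
)
```

On a preference dataset, each optimizer step would score a chosen/rejected pair and minimize the standard pairwise ranking loss, `-log(sigmoid(r_chosen - r_rejected))`, stepping the scheduler once per batch.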