The Llama3-8b-based Reward Model was trained using OpenRLHF and a combination of datasets available at https://huggingface.co/datasets/OpenLLMAI/preference_dataset_mixture2_and_safe_pku.

Training hyperparameters:

```
Scheduler: Cosine
Learning Rate: 9e-6
Warmup Ratio: 0.03
Batch Size: 256
Epochs: 1
```
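
For illustration, the following is a minimal sketch of pairwise reward-model training with these hyperparameters, written in plain PyTorch + transformers rather than OpenRLHF's own training script. The base checkpoint name, the `chosen`/`rejected` column names, and the max sequence length are assumptions; in practice a batch size of 256 on an 8B model would be reached via gradient accumulation and ZeRO sharding, which OpenRLHF handles internally.

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_cosine_schedule_with_warmup)

BASE_MODEL = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint
DATASET = "OpenLLMAI/preference_dataset_mixture2_and_safe_pku"
BATCH_SIZE, EPOCHS, LR, WARMUP_RATIO = 256, 1, 9e-6, 0.03

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token
# A single scalar head (num_labels=1) turns the LM into a reward model.
model = AutoModelForSequenceClassification.from_pretrained(
    BASE_MODEL, num_labels=1, torch_dtype=torch.bfloat16, device_map="cuda")
model.config.pad_token_id = tokenizer.pad_token_id

def collate(batch):
    # Assumes "chosen"/"rejected" text columns; adjust to the dataset schema.
    chosen = tokenizer([b["chosen"] for b in batch], padding=True,
                       truncation=True, max_length=2048, return_tensors="pt")
    rejected = tokenizer([b["rejected"] for b in batch], padding=True,
                         truncation=True, max_length=2048, return_tensors="pt")
    return chosen, rejected

data = load_dataset(DATASET, split="train")
loader = DataLoader(data, batch_size=BATCH_SIZE, shuffle=True,
                    collate_fn=collate)

# Cosine schedule with 3% linear warmup over the single epoch.
total_steps = EPOCHS * len(loader)
optimizer = torch.optim.AdamW(model.parameters(), lr=LR)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=int(WARMUP_RATIO * total_steps),
    num_training_steps=total_steps)

for chosen, rejected in loader:
    chosen = {k: v.to(model.device) for k, v in chosen.items()}
    rejected = {k: v.to(model.device) for k, v in rejected.items()}
    r_chosen = model(**chosen).logits.squeeze(-1)    # scalar reward per sample
    r_rejected = model(**rejected).logits.squeeze(-1)
    # Bradley-Terry pairwise loss: push chosen rewards above rejected ones.
    loss = -F.logsigmoid(r_chosen - r_rejected).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```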