Direct Preference Optimization: Your Language Model is Secretly a Reward Model • arXiv:2305.18290 • Published May 29, 2023
Fine-Grained Human Feedback Gives Better Rewards for Language Model Training • arXiv:2306.01693 • Published Jun 2, 2023
Secrets of RLHF in Large Language Models Part II: Reward Modeling • arXiv:2401.06080 • Published Jan 11, 2024