RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models • Paper 2402.10038 • Published Feb 15, 2024