Rethinking Diverse Human Preference Learning through Principal Component Analysis
Abstract
Understanding human preferences is crucial for improving foundation models and building personalized AI systems. However, preferences are inherently diverse and complex, making it difficult for traditional reward models to capture their full range. While fine-grained preference data can help, collecting it is expensive and hard to scale. In this paper, we introduce Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons without requiring fine-grained annotations. Our key insight is to represent human preferences as vectors and analyze them using Principal Component Analysis (PCA). By constructing a dataset of embedding differences between preferred and rejected responses, DRMs identify orthogonal basis vectors that capture distinct aspects of preference. These decomposed rewards can be flexibly combined to align with different user needs, offering an interpretable and scalable alternative to traditional reward models. We demonstrate that DRMs effectively extract meaningful preference dimensions (e.g., helpfulness, safety, humor) and adapt to new users without additional training. Our results highlight DRMs as a powerful framework for personalized and interpretable LLM alignment.
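The core procedure described in the abstract — forming difference vectors between preferred and rejected response embeddings, running PCA on them, and treating the resulting orthogonal directions as composable reward components — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the embeddings here are random placeholders standing in for features from a reward model's backbone, and the names (`h_chosen`, `reward`, the user weights) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for response embeddings from a reward model backbone:
# n preference pairs, each response embedded in d dimensions.
n, d = 200, 16
h_chosen = rng.normal(size=(n, d))    # embeddings of preferred responses
h_rejected = rng.normal(size=(n, d))  # embeddings of rejected responses

# Represent each binary comparison as a normalized difference vector.
x = h_chosen - h_rejected
x = x / np.linalg.norm(x, axis=1, keepdims=True)

# PCA via SVD of the centered difference matrix: the right singular
# vectors form an orthonormal basis of preference directions.
x_centered = x - x.mean(axis=0)
_, singular_values, vt = np.linalg.svd(x_centered, full_matrices=False)
basis = vt  # rows = orthogonal "decomposed reward" directions

# A user-specific reward is a weighted combination of the per-direction
# scores; here we illustratively weight only the top principal component.
weights = np.zeros(d)
weights[0] = 1.0

def reward(h):
    """Score a response embedding h under the combined decomposed rewards."""
    return (basis @ h) @ weights
```

Because the basis vectors are orthonormal, each one scores responses along an independent axis (e.g., helpfulness vs. safety in the paper's framing), and adapting to a new user reduces to choosing the mixing weights rather than retraining a reward model.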
Community
This paper introduces Decomposed Reward Models (DRMs), a novel approach that extracts diverse human preferences from binary comparisons through PCA, providing a powerful framework for understanding complex human preferences and personalized LLM alignment.
I think this paper should properly cite the work: Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment. https://huggingface.co/papers/2410.02197 (https://arxiv.org/abs/2410.02197)
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- A Survey of Personalized Large Language Models: Progress and Future Directions (2025)
- Data-adaptive Safety Rules for Training Reward Models (2025)
- RAG-Reward: Optimizing RAG with Reward Modeling and RLHF (2025)
- Multimodal Preference Data Synthetic Alignment with Reward Model (2024)
- Clear Preferences Leave Traces: Reference Model-Guided Sampling for Preference Learning (2025)
- Disentangling Preference Representation and Text Generation for Efficient Individual Preference Alignment (2024)
- Preference VLM: Leveraging VLMs for Scalable Preference-Based Reinforcement Learning (2025)