Great Models Think Alike and this Undermines AI Oversight
Abstract
As Language Model (LM) capabilities advance, evaluating and supervising them at scale is getting harder for humans. There is hope that other language models can automate both of these tasks, which we refer to as "AI Oversight". We study how model similarity affects both aspects of AI oversight by proposing a probabilistic metric for LM similarity based on overlap in model mistakes. Using this metric, we first show that LLM-as-a-judge scores favor models similar to the judge, generalizing recent self-preference results. We then study training on LM annotations, and find that complementary knowledge between the weak supervisor and the strong student model plays a crucial role in gains from "weak-to-strong generalization". As model capabilities increase, it becomes harder to find their mistakes, and we might defer more to AI oversight. However, we observe a concerning trend: model mistakes are becoming more similar with increasing capabilities, pointing to risks from correlated failures. Our work underscores the importance of reporting and correcting for model similarity, especially in the emerging paradigm of AI oversight.
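The abstract's similarity metric rests on overlap in model mistakes, corrected for the agreement two models of given accuracies would show by chance. The snippet below is a minimal sketch of that idea, assuming each model's results on a shared benchmark are available as a boolean per-question correctness vector; the kappa-style formula and function name are illustrative, not the paper's exact probabilistic metric (which is what the `lm-sim` package implements).

```python
import numpy as np

def error_overlap_similarity(correct_a, correct_b) -> float:
    """Chance-adjusted agreement in mistakes between two models.

    correct_a, correct_b: boolean arrays, True where each model answered a
    shared benchmark question correctly. This is a simplified, accuracy-adjusted
    agreement score (in the spirit of Cohen's kappa / error consistency), not
    the paper's probabilistic metric, which also uses output probabilities.
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    assert correct_a.shape == correct_b.shape

    # Observed agreement: fraction of questions where both models are right
    # or both are wrong.
    observed = np.mean(correct_a == correct_b)

    # Agreement expected by chance for two independent models with the same
    # marginal accuracies.
    acc_a, acc_b = correct_a.mean(), correct_b.mean()
    expected = acc_a * acc_b + (1 - acc_a) * (1 - acc_b)

    # 0 = no more alike than chance; 1 = identical mistake patterns.
    return (observed - expected) / (1 - expected + 1e-12)


# Toy example: two hypothetical models scored on ten questions.
a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
b = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 1], dtype=bool)
print(error_overlap_similarity(a, b))  # ~0.52: notably more overlap than chance
```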
Community
Project page: https://model-similarity.github.io/
To try it out yourself, play with our interactive tool: https://huggingface.co/spaces/bethgelab/lm-similarity
Or run `pip install lm-sim`
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Enabling Scalable Oversight via Self-Evolving Critic (2025)
- Do Language Models Understand the Cognitive Tasks Given to Them? Investigations with the N-Back Paradigm (2024)
- AgentRefine: Enhancing Agent Generalization through Refinement Tuning (2025)
- The Superalignment of Superhuman Intelligence with Large Language Models (2024)
- Counterfactual Samples Constructing and Training for Commonsense Statements Estimation (2024)
- On the Reasoning Capacity of AI Models and How to Quantify It (2025)
- Predicting the Performance of Black-box LLMs through Self-Queries (2025)