MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs
Abstract
Multimodal Large Language Models (MLLMs) have experienced rapid progress in visual recognition tasks in recent years. Given their potential integration into many critical applications, it is important to understand the limitations of their visual perception. In this work, we study whether MLLMs can perceive small visual details as effectively as large ones when answering questions about images. We observe that their performance is very sensitive to the size of the visual subject of the question, and further show that this effect is in fact causal by conducting an intervention study. Next, we study the attention patterns of MLLMs when answering visual questions, and intriguingly find that they consistently know where to look, even when they provide the wrong answer. Based on these findings, we then propose training-free visual intervention methods that leverage the internal knowledge of any MLLM itself, in the form of attention and gradient maps, to enhance its perception of small visual details. We evaluate our proposed methods on two widely-used MLLMs and seven visual question answering benchmarks and show that they can significantly improve MLLMs' accuracy without requiring any training. Our results elucidate the risk of applying MLLMs to visual recognition tasks concerning small details and indicate that visual intervention using the model's internal state is a promising direction to mitigate this risk.
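The abstract names attention and gradient maps as the internal signals driving the training-free intervention. Below is a minimal, hedged sketch of how a gradient-based relevance map over image patches could be computed; `model.score_answer`, the tensor layout, and the aggregation step are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def gradient_relevance_map(model, image_feats: torch.Tensor,
                           question: str, answer: str) -> torch.Tensor:
    """Relevance of each image patch to the model's answer tokens.

    image_feats: (num_patches, dim) visual tokens fed into the language model.
    Returns a (num_patches,) map; larger values mean more influence on the answer.
    """
    image_feats = image_feats.detach().requires_grad_(True)
    # Hypothetical helper: log-probability of `answer` given the image and question.
    log_prob = model.score_answer(image_feats, question, answer)
    log_prob.backward()
    # Per-patch gradient magnitude as a simple saliency heuristic.
    return image_feats.grad.norm(dim=-1)
```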
Community
In this paper, we found that MLLMs already know where to look—even when their final answers are wrong!
Inspired by this, we developed a method that tracks the model's attention while it answers a question and then feeds a cropped sub-image of the attended region back into the model, letting it double-check its focus before answering (see the sketch below).
We found that this method can significantly improve MLLMs' performance, especially in detail-sensitive scenarios, without any training!
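As a rough illustration of the attend-then-crop idea described above, here is a hedged sketch assuming a generic MLLM wrapper; `answer_question`, the `return_attention` flag, and the patch-grid attention output are hypothetical names for illustration, not an actual library API.

```python
import numpy as np
from PIL import Image

def attention_guided_crop(image: Image.Image, attn_grid: np.ndarray,
                          keep_ratio: float = 0.4) -> Image.Image:
    """Crop the image around the most-attended patch.

    attn_grid: (grid_h, grid_w) attention weights over image patches.
    keep_ratio: crop side length as a fraction of the full image size.
    """
    grid_h, grid_w = attn_grid.shape
    py, px = np.unravel_index(np.argmax(attn_grid), attn_grid.shape)
    # Map the patch index to the pixel coordinates of the patch center.
    cx = (px + 0.5) / grid_w * image.width
    cy = (py + 0.5) / grid_h * image.height
    crop_w, crop_h = keep_ratio * image.width, keep_ratio * image.height
    left = int(np.clip(cx - crop_w / 2, 0, image.width - crop_w))
    top = int(np.clip(cy - crop_h / 2, 0, image.height - crop_h))
    return image.crop((left, top, left + int(crop_w), top + int(crop_h)))

# Hypothetical usage with a generic MLLM wrapper:
# answer, attn_grid = mllm.answer_question(image, question, return_attention=True)
# refined = mllm.answer_question(attention_guided_crop(image, attn_grid), question)
```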
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Visual-RAG: Benchmarking Text-to-Image Retrieval Augmented Generation for Visual Knowledge Intensive Queries (2025)
- CLIP-UP: CLIP-Based Unanswerable Problem Detection for Visual Question Answering (2025)
- V2C-CBM: Building Concept Bottlenecks with Vision-to-Concept Tokenizer (2025)
- Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models (2025)
- GePBench: Evaluating Fundamental Geometric Perception for Multimodal Large Language Models (2024)
- Exploring Advanced Techniques for Visual Question Answering: A Comprehensive Comparison (2025)
- Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning (2025)