Large Language Models Think Too Fast To Explore Effectively
Abstract
Large Language Models (LLMs) have exhibited a range of emergent intellectual capacities. While numerous benchmarks assess their intelligence, limited attention has been given to their ability to explore, an essential capacity for discovering new information and adapting to novel environments in both natural and artificial systems. The extent to which LLMs can explore effectively, particularly in open-ended tasks, remains unclear. This study investigates whether LLMs can surpass humans in exploration during an open-ended task, using Little Alchemy 2 as a paradigm, in which agents combine elements to discover new ones. Results show that most LLMs underperform humans, with the exception of the o1 model: traditional LLMs rely primarily on uncertainty-driven strategies, whereas humans balance uncertainty and empowerment. Representational analysis of the models with Sparse Autoencoders reveals that uncertainty and choices are represented in earlier transformer blocks, while empowerment values are processed in later ones, causing LLMs to think too fast and make premature decisions that hinder effective exploration. These findings shed light on the limitations of LLM exploration and suggest directions for improving their adaptability.
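To make the two strategies contrasted in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's actual method) of an agent choosing which element pair to combine in a Little Alchemy 2-style task. Each candidate pair is scored as a weighted sum of an uncertainty bonus (less-tried pairs score higher) and an empowerment estimate (how many future discoveries the pair's result is expected to enable); the weights, scoring function, and empowerment values are illustrative assumptions.

```python
from collections import defaultdict

def choose_combination(pairs, trial_counts, empowerment,
                       w_uncertainty=1.0, w_empowerment=1.0):
    """Pick the element pair with the highest combined score of an
    uncertainty bonus and an empowerment estimate (both hypothetical)."""
    def score(pair):
        # Novelty bonus: decays with how often this pair has been tried.
        uncertainty = 1.0 / (1 + trial_counts[pair])
        return (w_uncertainty * uncertainty
                + w_empowerment * empowerment.get(pair, 0.0))
    return max(pairs, key=score)

# Illustrative inputs (made up for this sketch).
pairs = [("water", "fire"), ("earth", "air"), ("water", "earth")]
trial_counts = defaultdict(int, {("water", "fire"): 3})
empowerment = {("water", "earth"): 0.8}  # assumed empowerment estimates

best = choose_combination(pairs, trial_counts, empowerment)
```

A purely uncertainty-driven agent, like the traditional LLMs described above, corresponds to `w_empowerment=0`; a human-like agent keeps both weights nonzero, so a well-tried but empowering pair can still win over an untried dead end.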
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Do Language Models Understand the Cognitive Tasks Given to Them? Investigations with the N-Back Paradigm (2024)
- Virgo: A Preliminary Exploration on Reproducing o1-like MLLM (2025)
- Algorithmic Phase Transitions in Language Models: A Mechanistic Case Study of Arithmetic (2024)
- Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning (2024)
- A Survey on Large Language Models with some Insights on their Capabilities and Limitations (2025)
- LLM2: Let Large Language Models Harness System 2 Reasoning (2024)
- Large language models for artificial general intelligence (AGI): A survey of foundational principles and approaches (2025)
thanks, quite interesting