arxiv:2406.17513

Benchmarking Mental State Representations in Language Models

Published on Jun 25 · Submitted by mb22222 on Jun 28

Abstract

While numerous works have assessed the generative performance of language models (LMs) on tasks requiring Theory of Mind reasoning, research into the models' internal representation of mental states remains limited. Recent work has used probing to demonstrate that LMs can represent beliefs of themselves and others. However, these claims are accompanied by limited evaluation, making it difficult to assess how mental state representations are affected by model design and training choices. We report an extensive benchmark spanning various LM types, model sizes, fine-tuning approaches, and prompt designs to study the robustness of mental state representations and memorisation issues within the probes. Our results show that the quality of models' internal representations of the beliefs of others increases with model size and, more crucially, with fine-tuning. We are the first to study how prompt variations impact probing performance on Theory of Mind tasks, and we demonstrate that models' representations are sensitive to prompt variations even when such variations should be beneficial. Finally, we complement previous activation editing experiments on Theory of Mind tasks and show that it is possible to improve models' reasoning performance by steering their activations without the need to train any probe.
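To make the probing setup the abstract refers to concrete, here is a minimal sketch: a linear probe (logistic regression) is trained on residual-stream activations to decode a belief label. Everything specific here is a placeholder, not the paper's setup: the model (gpt2), the probed layer, and the toy false-belief stories are all hypothetical illustrations.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model, chosen only for illustration
LAYER = 6            # hypothetical layer to probe

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def last_token_activation(prompt: str) -> torch.Tensor:
    """Residual-stream activation of the final prompt token at LAYER."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]  # shape: (hidden_dim,)

# Toy labelled stories (hypothetical, not the paper's data): does the
# protagonist hold a true (1) or false (0) belief about the ball's location?
stories = [
    ("Sally puts the ball in the basket and stays in the room.", 1),
    ("Sally puts the ball in the basket and watches Anne move it to the box.", 1),
    ("Sally puts the ball in the basket and leaves; Anne moves it to the box.", 0),
    ("Sally puts the ball in the box and leaves; Anne moves it to the basket.", 0),
]

X = torch.stack([last_token_activation(s) for s, _ in stories]).numpy()
y = [label for _, label in stories]

probe = LogisticRegression(max_iter=1000).fit(X, y)  # linear belief probe
print("train accuracy:", probe.score(X, y))
```

In the benchmarking setting the paper describes, probes like this would be trained and evaluated across model sizes, fine-tuning variants, and prompt designs, with held-out data to control for the memorisation issues the abstract mentions.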

Community

Paper author · Paper submitter

Our paper has been accepted to the ICML 2024 Workshop on Mechanistic Interpretability! We explore how well LMs of different sizes, architectures, and fine-tuning approaches represent mental states. We also use contrastive activation addition to improve LMs' performance and generalisability on theory of mind tasks, all without the need to train anything! A toy sketch of the steering idea follows the links below.

Paper: https://arxiv.org/pdf/2406.17513
Code: https://git.hcics.simtech.uni-stuttgart.de/public-projects/mental-states-in-LMs
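For readers unfamiliar with contrastive activation addition, the sketch below shows the general idea under stated assumptions, not the authors' actual code: a steering vector is computed as the activation difference between a contrastive prompt pair and added back into the residual stream via a forward hook, with no probe or model training. The model (gpt2), layer, steering strength, and prompts are hypothetical stand-ins.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, ALPHA = 6, 4.0  # hypothetical layer index and steering strength

def act(prompt: str) -> torch.Tensor:
    """Last-token hidden state at LAYER (hidden_states[0] is the embedding)."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states
    return hs[LAYER][0, -1]

# Contrastive pair (toy example): same story, completions reflecting
# correct vs. incorrect tracking of Sally's false belief.
pos = "Anne moved the ball to the box while Sally was away. Sally will look in the basket."
neg = "Anne moved the ball to the box while Sally was away. Sally will look in the box."
steer = act(pos) - act(neg)  # steering vector, no training required

def add_vector(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are output[0].
    hidden = output[0] + ALPHA * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

# hidden_states[LAYER] is the output of block LAYER-1, so hook that block.
handle = model.transformer.h[LAYER - 1].register_forward_hook(add_vector)
ids = tok("Sally returns. She will look for the ball in the", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=8, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```

At this toy scale the steered output is unlikely to be meaningful; the sketch only illustrates the mechanism the comment describes, where activations are shifted along a contrastively derived direction at inference time.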
