"We explore the intriguing possibility that theory of mind (ToM), or the uniquely human ability to impute unobservable mental states to others, might have spontaneously emerged in large language models (LLMs). We designed 40 false-belief tasks, considered a gold standard in testing ToM in humans, and administered them to several LLMs. Each task included a false-belief scenario, three closely matched true-belief controls, and the reversed versions of all four. Smaller and older models solved no tasks; GPT-3-davinci-001 (from May 2020) and GPT-3-davinci-002 (from January 2022) solved 10%; and GPT-3-davinci-003 (from November 2022) and ChatGPT-3.5-turbo (from March 2023) solved 35% of the tasks, mirroring the performance of three-year-old children. ChatGPT-4 (from June 2023) solved 90% of the tasks, matching the performance of seven-year-old children. These findings suggest the intriguing possibility that ToM, previously considered exclusive to humans, may have spontaneously emerged as a byproduct of LLMs' improving language skills."