Human Theory of Mind enables us to attribute mental states like beliefs and desires to others based on how they act. However, many social interactions (particularly ones that lack observable action) also require people to think directly about other people's thinking. Here we present a computational framework, Bayesian inverse reasoning, for thinking about other people's thoughts. Our framework formalizes inferences about thinking by inverting a generative model of reasoning decisions and computational processes, structured around a principle of rational mental effort: the idea that people expect other agents to allocate thinking rationally. We show that this model quantitatively predicts human judgments in a task where participants must infer the mental causes behind an agent's pauses as it navigates and solves a maze. Our results contribute to our understanding of the richness of the human ability to think about other minds, and even to think about thinking itself.
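The inference described above can be sketched as ordinary Bayesian inversion. The following toy model is purely illustrative (the causes, priors, and pause durations are invented for this sketch, not taken from the paper): an observer scores candidate mental causes of a pause by how well each cause's predicted thinking time matches the observed pause, weighted by a prior.

```python
# Hypothetical sketch of Bayesian inverse reasoning over pauses.
# All numbers and hypothesis names here are illustrative assumptions,
# not the authors' actual model or data.
import math

def posterior_over_causes(pause_duration, hypotheses):
    """hypotheses: dict mapping cause -> (prior, expected_pause_seconds).
    Likelihood is a simple exponential penalty on the mismatch between
    the observed pause and the pause each mental cause would predict."""
    unnormalized = {
        cause: prior * math.exp(-abs(pause_duration - expected))
        for cause, (prior, expected) in hypotheses.items()
    }
    z = sum(unnormalized.values())
    return {cause: weight / z for cause, weight in unnormalized.items()}

# A long pause (2.8 s) is better explained by costly replanning
# (predicted ~3 s of thinking) than by a quick lookahead (~1 s).
beliefs = posterior_over_causes(
    2.8, {"replanning": (0.5, 3.0), "lookahead": (0.5, 1.0)}
)
```

Under the rational-mental-effort principle, the expected pause for each cause would itself come from a model of how much computation that cause demands; here it is simply stipulated.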
How veridical is perception? Rather than representing objects as they actually exist in the world, might perception instead represent objects only in terms of the utility they offer to an observer? Previous work employed evolutionary modeling to show that under certain assumptions, natural selection favors such "strict-interface" perceptual systems. This view has fueled considerable debate, but we think that discussions so far have failed to consider the implications of two critical aspects of perception. First, while existing models have explored single utility functions, perception will often serve multiple largely independent goals. (Sometimes when looking at a stick you want to know how appropriate it would be as kindling for a campfire, and other times you want to know how appropriate it would be as a weapon for self-defense.) Second, perception often operates in an inflexible, automatic manner, proving "impenetrable" to shifting higher-level goals. (When your goal shifts from "burning" to "fighting," your visual experience does not dramatically transform.) These two points have important implications for the veridicality of perception. In particular, as the need for flexible goals increases, inflexible perceptual systems must become more veridical. We support this position by providing evidence from evolutionary simulations that as the number of independent utility functions increases, the distinction between "interface" and "veridical" perceptual systems dissolves. Although natural selection evaluates perceptual systems only on their fitness, the most fit perceptual systems may nevertheless represent the world as it is.
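The core simulation logic can be illustrated in a few lines. This is an assumed, minimal reconstruction of the argument (the perceptual strategies, utility functions, and payoff scheme are invented for this sketch): agents choose between two items based only on their percepts, fitness is scored by true utility, and the goal in force varies from trial to trial.

```python
# Hypothetical sketch of an interface-vs-veridical fitness comparison.
# Strategies, utilities, and payoffs are illustrative assumptions,
# not the authors' simulation code.
import random

random.seed(0)

def fitness(perceive, utilities, n_trials=2000):
    """Expected payoff when an agent picks the item whose *percept*
    is larger, scored by the *true* utility of the chosen item.
    The operative utility (goal) varies from trial to trial."""
    total = 0.0
    for _ in range(n_trials):
        a, b = random.random(), random.random()
        utility = random.choice(utilities)
        chosen = a if perceive(a) >= perceive(b) else b
        total += utility(chosen)
    return total / n_trials

def veridical(x):
    # Percept tracks the world quantity itself.
    return x

def interface(x):
    # Percept tracks one fixed utility: closeness to an optimum at 0.5.
    return -abs(x - 0.5)

# A single, interface-matched goal vs. several distinct goals.
u_single = [lambda x: -abs(x - 0.5)]
u_many = [lambda x: x, lambda x: x ** 2, lambda x: -abs(x - 0.5)]
```

With the single matched utility the interface strategy outperforms the veridical one, but once the agent must serve several distinct goals through the same inflexible percept, the veridical strategy pulls ahead, mirroring the abstract's claim.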
Can non-human primates (NHPs) represent other minds? Answering this question has been historically difficult because primates can fail experimental tasks due to a lack of motivation, or succeed through simpler mechanisms. Here we introduce a computational approach for comparative cognition that enables us to quantitatively test the explanatory power of competing accounts. We formalized a collection of theories of NHP social cognition with varying representational complexity and compared them against data from classical NHP studies, focusing on the ability to determine what others know based on what they see. Our results revealed that, while the most human-like models of NHP social cognition make perfect qualitative predictions, they predict effect sizes that are too strong to be plausible. Instead, theories of intermediate representational complexity best explained the data. At the same time, we show that it is possible for human-like models to capture NHP behavior, as long as we assume that NHPs rely on these representations only about one third of the time. These results show that, in visual perspective-taking tasks, NHPs likely draw upon simpler social representations than humans, either in terms of representational complexity, or in terms of use.
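The "one third of the time" account is naturally expressed as a mixture model. The sketch below is an illustrative reconstruction (the response probabilities and strategy names are assumed, not the paper's fitted values): predicted behavior blends a full theory-of-mind strategy, consulted with probability rho, with a simpler heuristic used the rest of the time.

```python
# Hypothetical mixture-model sketch: human-like representations used
# only a fraction rho of the time. Numbers are illustrative assumptions.

def mixture_prediction(p_tom, p_heuristic, rho=1 / 3):
    """Predicted probability of the 'correct' social response when the
    full theory-of-mind representation is consulted with probability rho
    and a simpler heuristic is used otherwise."""
    return rho * p_tom + (1 - rho) * p_heuristic

# Suppose a full ToM model predicts near-ceiling performance (0.95)
# while a low-level heuristic predicts chance (0.5); the mixture lands
# at the kind of intermediate effect size the data favor.
predicted = mixture_prediction(0.95, 0.5)
```

Fitting rho to the classical NHP datasets is what lets such a model reconcile human-like representations with the moderate effect sizes actually observed.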
In contrast to object recognition models, humans do not blindly trust their perception when building representations of the world, instead recruiting metacognition to detect percepts that are unreliable or false, such as when we realize that we mistook one object for another. We propose METAGEN, an unsupervised model that enhances object recognition models with a metacognitive layer. Given noisy output from an object-detection model, METAGEN learns a meta-representation of how its perceptual system works and uses it to infer the objects in the world responsible for the detections. METAGEN achieves this by conditioning its inference on basic principles of objects that even human infants understand (known as Spelke principles: object permanence, cohesion, and spatiotemporal continuity). We test METAGEN on a variety of state-of-the-art object detection neural networks. We find that METAGEN quickly learns an accurate metacognitive representation of the neural network, and that this improves detection accuracy by filling in objects that the detection model missed and removing hallucinated objects. This approach enables generalization to out-of-sample data and outperforms comparison models that lack metacognition.
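The filtering step can be sketched with elementary Bayesian bookkeeping. This is a minimal, assumed illustration of the idea (the hit and false-alarm rates, frame counts, and independence assumption are invented for this sketch, not METAGEN's released implementation): once the system has learned how often its detector misses real objects or hallucinates absent ones, it can pool detections across frames, motivated by object permanence, to decide which detections reflect real objects.

```python
# Hypothetical sketch of metacognitive filtering of noisy detections.
# Rates and counts are illustrative assumptions, not METAGEN's code.

def posterior_object_present(n_detected, n_frames, hit_rate, fa_rate, prior=0.5):
    """Posterior that an object is really in the scene, given it was
    detected in n_detected of n_frames. Frames are treated as independent
    Bernoulli observations; object permanence justifies pooling frames."""
    n_missed = n_frames - n_detected
    like_present = (hit_rate ** n_detected) * ((1 - hit_rate) ** n_missed)
    like_absent = (fa_rate ** n_detected) * ((1 - fa_rate) ** n_missed)
    numerator = prior * like_present
    return numerator / (numerator + (1 - prior) * like_absent)

# Detected in 3 of 5 frames, with a learned 80% hit rate and a 10%
# false-alarm rate: the detections likely reflect a real object.
p_real = posterior_object_present(3, 5, 0.8, 0.1)
```

The same computation run on a one-off detection (1 of 5 frames) yields a low posterior, which is how such a scheme removes hallucinated objects while filling in ones the detector intermittently misses.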