Interacting with a system is key to uncovering its causal structure. A computational framework for interventional causal learning has been developed over the last decade, but how real causal learners might achieve or approximate the computations entailed by this framework is still poorly understood. Here we describe an interactive computer task in which participants were incentivized to learn the structure of probabilistic causal systems through free selection of multiple interventions. We develop models of participants' intervention choices and online structure judgments based on expected utility gain, probability gain, and information gain, and introduce plausible memory and processing constraints. We find that successful participants are best described by a model that acts to maximize information (rather than expected score or probability of being correct); that forgets much of the evidence received in earlier trials; but that mitigates this forgetting by being conservative, preferring structures consistent with earlier stated beliefs. We explore two heuristics that partly explain how participants might approximate these models without explicitly representing or updating a hypothesis space.
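The information-gain model described above scores an intervention by how much it is expected to reduce uncertainty over the candidate causal structures. The following is a minimal sketch of that computation, not the authors' implementation; the function names, the dict-based hypothesis representation, and the toy likelihoods in the usage example are all illustrative assumptions.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution over hypotheses."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_information_gain(prior, likelihood, intervention, outcomes):
    """Expected reduction in entropy over causal structures from one intervention.

    prior:        dict mapping each hypothesis to its current probability
    likelihood:   function (outcome, hypothesis, intervention) -> P(outcome | h, do(intervention))
    outcomes:     iterable of possible observed outcomes
    """
    h_prior = entropy(prior)
    eig = 0.0
    for o in outcomes:
        # Marginal probability of observing o under this intervention.
        p_o = sum(prior[h] * likelihood(o, h, intervention) for h in prior)
        if p_o == 0:
            continue
        # Bayesian posterior over structures after observing o.
        posterior = {h: prior[h] * likelihood(o, h, intervention) / p_o
                     for h in prior}
        eig += p_o * (h_prior - entropy(posterior))
    return eig
```

For example, with two hypotheses A→B and B→A and the intervention do(A=1), B responding to A is diagnostic only under A→B, so the expected information gain is positive; an intervention whose outcome distribution is identical under every hypothesis has an expected gain of zero.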
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations focuses on the sequential absorption of successive inputs and is captured by the Neurath's ship metaphor in philosophy of science. On this view, theory change is a stochastic and gradual process, shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of three experiments.
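The single-hypothesis, local-edit scheme described above can be sketched as a Metropolis-style search: the learner holds one structure, considers a neighbouring structure (e.g. one edge added, removed, or reversed), and stochastically keeps or abandons the current theory based on how well each explains the evidence. This is a minimal illustration under stated assumptions, not the paper's model; the function names, the integer hypothesis encoding in the usage example, and the acceptance rule are all illustrative.

```python
import random

def local_edit_search(current, evidence, likelihood, propose_edit,
                      steps=100, rng=random):
    """Single-hypothesis local search in the spirit of the Neurath's ship account.

    current:      the learner's one current causal-structure hypothesis
    likelihood:   function (evidence, hypothesis) -> P(evidence | hypothesis)
    propose_edit: function (hypothesis, rng) -> a neighbouring hypothesis
    """
    for _ in range(steps):
        candidate = propose_edit(current, rng)
        old = likelihood(evidence, current)
        new = likelihood(evidence, candidate)
        # Accept the local edit stochastically; only one alternative is ever
        # compared against the current theory, so no hypothesis space is
        # explicitly represented or updated.
        if old == 0 or rng.random() < min(1.0, new / old):
            current = candidate
    return current
```

With a symmetric edit proposal, this chain spends most of its time on high-likelihood structures while still drifting gradually, plank by plank, rather than jumping to the global posterior mode.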
Children between 5 and 8 years of age freely intervened on a three-variable causal system; their task was to discover whether it was a common-cause structure or one of two causal chains. From 6 or 7 years of age, children were able to use information from their interventions to correctly disambiguate the structure of a causal chain. We used a Bayesian model to examine children's interventions on the system; this showed that, with development, children became more efficient at producing the interventions needed to disambiguate the causal structure, and that the quality of their interventions, as measured by informativeness, improved with age. The latter measure was a significant predictor of children's correct inferences about the causal structure. A second experiment showed that levels of performance were not reduced in a task where children did not select and carry out interventions themselves, indicating no advantage for self-directed learning. However, children's performance was not related to intervention quality in these circumstances, suggesting that children learn in a different way when they carry out interventions themselves.
Many aspects of our physical environment are hidden. For example, it is hard to estimate how heavy an object is from visual observation alone. In this paper we examine how people actively "experiment" within the physical world to discover such latent properties. In the first part of the paper, we develop a novel framework for the quantitative analysis of the information produced by physical interactions. We then describe two experiments that present participants with moving objects in "microworlds" that operate according to continuous spatiotemporal dynamics similar to everyday physics (i.e., forces of gravity, friction, etc.). Participants were asked to interact with objects in the microworlds in order to identify their masses, or the forces of attraction/repulsion that governed their movement. Using our modeling framework, we find that learners who freely interacted with the physical system selectively produced evidence that revealed the physical property consistent with their inquiry goal. As a result, their inferences were more accurate than those of passive observers and, in some contexts, of yoked participants who watched video replays of an active learner's interactions. We classify active learners' actions into a range of micro-experiment strategies and discuss how these might be learned or generalized from past experience. The technical contribution of this work is the development of a novel analytic framework and methodology for the study of interactive learning about the physical world. Its empirical contribution is the demonstration of sophisticated, goal-directed human active learning in a naturalistic context.
One remarkable aspect of human cognition is our ability to reason about physical events. This article provides novel evidence that intuitive physics is subject to a peculiar error, the classic conjunction fallacy, in which people rate the probability of a conjunction of two events as more likely than one constituent (a logical impossibility). Participants viewed videos of physical scenarios and judged the probability that either a single event or a conjunction of two events would occur. In Experiment 1 (n = 60), participants consistently rated conjunction events as more likely than single events for the same scenes. Experiment 2 (n = 180) extended these results to rule out several alternative explanations. Experiment 3 (n = 100) generalized the finding to different scenes. This demonstration of conjunction errors contradicts claims that such errors should not appear in intuitive physics and presents a serious challenge to current theories of mental simulation in physical reasoning.