A normative framework for modeling causal and counterfactual reasoning has been proposed by Spirtes, Glymour, and Scheines (1993; cf. Pearl, 2000). The framework takes as fundamental that reasoning from observation and reasoning from intervention differ. Intervention includes actual manipulation as well as counterfactual manipulation of a model in thought. To represent intervention, Pearl employed the do operator, which simplifies the structure of a causal model by disconnecting an intervened-on variable from its normal causes. Construing the do operator as a psychological function affords a family of predictions, which we refer to as undoing, about how people reason when asked counterfactual questions about causal relations; these effects derive from the claim that intervened-on variables become independent of their normal causes. Six studies support the prediction for causal (A causes B) arguments but not consistently for parallel conditional (if A then B) ones. Two of the studies show that effects are treated as diagnostic when their values are observed but as nondiagnostic when they are intervened on. These results cannot be explained by theories that fail to distinguish interventions from other sorts of events.
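The observation/intervention asymmetry described above can be sketched in a minimal two-variable causal model, A causes B. The probabilities and function names here are illustrative assumptions, not values from the studies: observing the effect B is diagnostic of its cause A (Bayesian updating), whereas applying the do operator to B severs the link from A, leaving belief in A at its prior.

```python
# Illustrative two-variable structural model: A -> B.
# All numbers are hypothetical, chosen only to show the asymmetry.
P_A = 0.5           # prior probability that cause A is present
P_B_GIVEN_A = 0.9   # P(B | A): the effect usually follows its cause
P_B_GIVEN_NOT_A = 0.1

def observe_B() -> float:
    """Seeing B occur is diagnostic: update belief in A by Bayes' rule."""
    P_B = P_B_GIVEN_A * P_A + P_B_GIVEN_NOT_A * (1 - P_A)
    return P_B_GIVEN_A * P_A / P_B

def do_B() -> float:
    """Intervening to set B (Pearl's do operator) disconnects B from A,
    so B becomes nondiagnostic: belief in A stays at its prior."""
    return P_A

print(observe_B())  # 0.9  -- observed effect raises belief in the cause
print(do_B())       # 0.5  -- intervened-on effect leaves the prior intact
```

The undoing prediction is just this contrast read as psychology: people asked a counterfactual that intervenes on B should treat A as unchanged, while people who merely observe B should infer A.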
Can people learn causal structure more effectively through intervention than through observation? Four studies used a trial-based learning paradigm in which participants obtained probabilistic data about a causal chain through either observation or intervention and then selected the causal model most likely to have generated the data. Experiment 1 demonstrated that interveners made more correct model choices than did observers, and Experiments 2 and 3 ruled out explanations for this advantage in terms of informational differences between the two conditions. Experiment 4 tested the hypothesis that the advantage was driven by a temporal signal: interveners may exploit the cue that their interventions are the most likely causes of any subsequent changes. Results supported this temporal cue hypothesis.
When agents violate norms, they are typically judged to be more of a cause of resulting outcomes. In this paper, we suggest that norm violations also affect the causality attributed to other agents, a phenomenon we refer to as "causal superseding." We propose and test a counterfactual reasoning model of this phenomenon in four experiments. Experiments 1 and 2 provide an initial demonstration of the causal superseding effect and distinguish it from previously studied effects. Experiment 3 shows that causal superseding depends on a particular event structure, following a prediction of our counterfactual model. Experiment 4 demonstrates that causal superseding can occur with violations of non-moral norms. We propose a model of the superseding effect based on the idea of counterfactual sufficiency.
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
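The pivotality-and-criticality account can be given a toy numerical form. This is a hedged sketch, not the authors' exact model: following Chockler and Halpern (2004), graded pivotality is taken as 1/(k+1), where k is the minimal number of other agents' actions that would have to change for this agent to make a difference to the outcome, and criticality is assumed to be a perceived-importance weight in [0, 1]; multiplying the two is one simple way to combine them.

```python
# Illustrative sketch of a responsibility judgment combining pivotality
# and criticality. The multiplicative combination is an assumption made
# here for concreteness, not the published model's exact functional form.

def pivotality(k: int) -> float:
    """Graded pivotality: k is the minimal number of changes to other
    agents' actions needed before this agent's action is pivotal."""
    return 1.0 / (k + 1)

def responsibility(k: int, criticality: float) -> float:
    """Responsibility increases with pivotality and with how critical
    the agent was perceived to be before any actions were taken."""
    return criticality * pivotality(k)

# A directly pivotal, fully critical agent bears full responsibility:
print(responsibility(0, 1.0))  # 1.0
# An agent two changes away from pivotality, moderately critical:
print(responsibility(2, 0.5))  # ~0.167
```

On this sketch, over-determined outcomes (k > 0) dilute each agent's pivotality, but a highly critical agent can still attract substantial responsibility, which is the pattern the pivotality-by-criticality experiment manipulates.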