Causal selection is the cognitive process through which one or more elements in a complex causal structure are singled out as actual causes of a certain effect. In this paper, we report on an experiment in which we investigated the role of moral and temporal factors in causal selection. Our results are as follows. First, when presented with a temporal chain in which two human agents perform the same action one after the other, subjects tend to judge the later agent to be the actual cause. Second, the impact of temporal location on causal selection is almost canceled out if the later agent did not violate a norm while the former did. We argue that this is due to the impact that judgments of norm violation have on causal selection—even if the violated norm has nothing to do with the resulting effect. Third, moral judgments about the effect influence causal selection even when agents could not have foreseen the effect and did not intend to bring it about. We discuss our findings in connection with recent theories of the role of moral judgment in causal reasoning, on the one hand, and with probabilistic models of temporal location, on the other.
A prominent finding in causal cognition research is people's tendency to attribute increased causality to atypical actions. If two agents jointly cause an outcome ("conjunctive causation"), but differ in how frequently they have performed the causal action before, people judge the atypically acting agent to have caused the outcome to a greater extent than the normally acting agent. In this paper, we argue that it is the epistemic state of an abnormally acting agent, rather than the abnormality of their action, that drives people's causal judgments. Given the predictability of the normally acting agent's behaviour, the abnormal agent is in a better position to foresee the consequences of their action. We put this hypothesis to the test in four experiments. In Experiment 1, we show that people judge the atypical agent as more causal than the normally acting agent, but also perceive an epistemic advantage of the abnormal agent. In Experiment 2, we find that people no longer judge the agents' causality differently when there is no epistemic asymmetry between them. In Experiment 3, we replicate these findings for a scenario in which the abnormal agent's epistemic advantage generalises to a novel context. In Experiment 4, we extend these findings to mental states more broadly construed. We develop a Bayesian network model that predicts the degree of mental states based on action normality and epistemic states, and find that people infer mental states like desire and intention to a greater extent from abnormal behaviour. We discuss these results in light of current theories and research on people's preference for atypical causes.
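The abstract does not specify the structure of the Bayesian network model. As a rough illustration of the general idea only — inferring latent mental states such as desire from action normality and epistemic state by Bayesian inversion — one might sketch it as follows; the graph structure and all probabilities are invented placeholders, not the authors' model:

```python
# Illustrative sketch only: the structure and all probabilities below are
# made-up placeholder values, not taken from the paper. Assumed structure:
# desire and knowledge (epistemic state) jointly influence whether an agent
# acts abnormally; observing an abnormal action then lets an observer infer
# the latent mental states by Bayes' rule.

from itertools import product

P_desire = {True: 0.3, False: 0.7}   # prior on desiring the outcome
P_knows = {True: 0.5, False: 0.5}    # prior on foreseeing the consequences

def p_abnormal(desire, knows):
    """Placeholder CPT: abnormal action is more likely when the agent
    desires the outcome and can foresee the consequences."""
    return min(1.0, 0.1 + 0.5 * desire + 0.3 * knows)

def posterior_desire(action_abnormal=True):
    """P(desire | action) computed by enumeration over the joint."""
    joint = {}
    for d, k in product([True, False], repeat=2):
        p_act = p_abnormal(d, k)
        if not action_abnormal:
            p_act = 1.0 - p_act
        joint[(d, k)] = P_desire[d] * P_knows[k] * p_act
    z = sum(joint.values())
    return sum(p for (d, _), p in joint.items() if d) / z

# Observing an abnormal action raises the posterior on desire above its
# prior, mirroring the qualitative pattern reported for Experiment 4.
print(posterior_desire(True))
```

In this toy version, the same machinery also yields a lower-than-prior posterior after a normal action, which is the inferential asymmetry the abstract describes.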
Did Tom’s use of nuts in the dish cause Billy’s allergic reaction? According to counterfactual theories of causation, an agent is judged a cause to the extent that their action made a difference to the outcome (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2020; Gerstenberg, Halpern, & Tenenbaum, 2015; Halpern, 2016; Hitchcock & Knobe, 2009). In this paper, we argue for the integration of epistemic states into current counterfactual accounts of causation. In the case of ignorant causal agents, we demonstrate that people’s counterfactual reasoning primarily targets the agent’s epistemic state – what the agent doesn’t know – and their epistemic actions – what they could have done to know – rather than the agent’s actual causal action. In four experiments, we show that people’s causal judgment as well as their reasoning about alternatives is sensitive to the epistemic conditions of a causal agent: knowledge vs. ignorance (Experiment 1), self-caused vs. externally caused ignorance (Experiment 2), the number of epistemic actions (Experiment 3), and the epistemic context (Experiment 4). We see two advantages in integrating epistemic states into causal models and counterfactual frameworks. First, modeling interventions on indirect, epistemic causes may allow us to explain why people attribute decreased causality to ignorant vs. knowing causal agents. Second, causal agents’ epistemic states pick out those factors that can be controlled or manipulated in order to achieve desirable future outcomes, reflecting the forward-looking dimension of causality. We discuss our findings in the broader context of moral and causal cognition.
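The counterfactual ("difference-making") test cited above can be given a minimal structural-causal-model reading. The sketch below is an invented illustration of the but-for test on the nut-allergy example, not the cited authors' formal models:

```python
# Illustrative sketch only: a minimal but-for counterfactual test on the
# Tom/Billy example. The structural equation is invented for illustration
# and is not taken from the cited papers.

def outcome(uses_nuts, billy_allergic):
    """Structural equation: the allergic reaction occurs iff Tom uses
    nuts AND Billy is allergic to them."""
    return uses_nuts and billy_allergic

def made_a_difference(action_value, background, model):
    """But-for test: did flipping the agent's action, holding the
    background fixed, change the outcome?"""
    actual = model(action_value, background)
    counterfactual = model(not action_value, background)
    return actual != counterfactual

# Tom actually used nuts and Billy is allergic: flipping Tom's action
# flips the outcome, so the action counts as difference-making.
print(made_a_difference(True, True, outcome))   # True
```

The paper's proposal can then be read as replacing the intervened-on variable: for an ignorant agent, the counterfactual targets the agent's epistemic state or epistemic actions rather than `uses_nuts` itself.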
It has recently been argued that normative considerations play an important role in causal cognition. For instance, when an agent violates a moral rule and thereby produces a negative outcome, she will be judged to be much more of a cause of the outcome than someone who performed the same action without violating a norm. While there is a substantial amount of evidence reporting these effects, it is still a matter of debate how this evidence is to be interpreted. In this paper, we engage with the three most influential classes of explanations, namely, (a) the Norm-Sensitive Cognitive Process View, (b) the Normative Concept View, and (c) the Pragmatics View. We outline how these theories explain the empirical results and in what ways they differ. We conclude with a reflection on how well these explanatory strategies fare overall and what questions they still leave unanswered.
What do we communicate with causal explanations? Upon being told, "E because C", one might learn that C and E both occurred, and perhaps that there is a causal relationship between C and E. In fact, causal explanations systematically disclose much more than this basic information. Here, we offer a communication-theoretic account of explanation that makes specific predictions about the kinds of inferences people draw from others' explanations. We test these predictions in a case study involving the role of norms and causal structure. In Experiment 1, we demonstrate that people infer the normality of a cause from an explanation when they know the underlying causal structure. In Experiment 2, we show that people infer the causal structure from an explanation if they know the normality of the cited cause. We find these patterns for scenarios that manipulate both the statistical and the prescriptive normality of events. Finally, we consider how the communicative function of explanations, as highlighted in this series of experiments, may help to elucidate the distinctive roles that normality and causal structure play in causal judgment, paving the way toward a more comprehensive account of causal explanation.
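The inference in Experiment 1 — recovering the normality of a cited cause from the fact that it was cited — can be illustrated with a toy Bayesian listener. This is an invented sketch of the general communication-theoretic idea, not the authors' actual model; the citation probabilities are placeholder assumptions:

```python
# Illustrative sketch only: a toy Bayesian listener, invented here to
# illustrate the communication-theoretic idea, not the authors' model.
# Assumption (placeholder): in a known conjunctive causal structure,
# speakers tend to cite the abnormal cause; a listener who knows this
# convention can invert it to infer the cited cause's normality.

P_ABNORMAL_PRIOR = 0.5    # listener's prior that cause C is abnormal
P_CITE_IF_ABNORMAL = 0.8  # speaker cites C when C is abnormal
P_CITE_IF_NORMAL = 0.2    # speaker cites C when C is normal

def listener_posterior(prior, p_cite_abn, p_cite_norm):
    """P(C abnormal | speaker said "E because C"), by Bayes' rule."""
    num = prior * p_cite_abn
    return num / (num + (1 - prior) * p_cite_norm)

# Hearing the explanation raises the listener's belief that C was abnormal.
print(listener_posterior(P_ABNORMAL_PRIOR, P_CITE_IF_ABNORMAL, P_CITE_IF_NORMAL))  # 0.8
```

Experiment 2's pattern is the mirror image: holding the normality of C known, the listener would instead update over candidate causal structures that make citing C likely.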