Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.
The Role of Causality in Judgment Under Uncertainty

Everywhere in life, people are faced with situations that require intuitive judgments of probability. How likely is it that this person is trustworthy? That this meeting will end on time? That this pain in my side is a sign of a serious disease? Survival and success in the world depend on making judgments that are as accurate as possible given the limited information that is often available.

To explain how people make judgments under uncertainty, researchers typically invoke a computational framework that clarifies the kinds of inputs, computations, and outputs people are expected to use during judgment. We can view human judgments as approximations (sometimes better, sometimes worse) to modes of reasoning within a rational computational framework, where a computation is "rational" to the extent that it provides adaptive value in real-world tasks and environments. However, there is more than one rational framework for judgment under uncertainty, and behavior that looks irrational under one framework may look rational under another. Consequently, evidence of "error-prone" behavior as judged by one framework may instead be taken as evidence that a different rational framework is appropriate.

This paper considers the question of which computational framework best explains people's judgments under uncertainty. To answer this, we must consider what kinds of real-world tasks and environments people encounter, which frameworks are best suited to these environments (i.e., which we should take to be normative), and how well these frameworks predict people's actual judgments under uncertainty (i.e., which framework offers the best descriptive model). We will propose that a causal Bayesian framework, in which Bayesian inferences are made over causal models, represents a more appropriate normative standard and a more accurate descriptive model than previous frameworks for judgment under uncertainty.
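To make the contrast concrete, the classical Bayesian norm can be illustrated with a hypothetical diagnosis problem of the sort used in base-rate studies (the numbers here are illustrative only, not drawn from our experiments): suppose a disease has a base rate of 1%, the test detects it 80% of the time, and it gives a false positive 10% of the time. Bayes' rule prescribes combining the base rate with the diagnostic evidence:

P(\text{disease} \mid \text{positive}) \;=\; \frac{P(\text{positive} \mid \text{disease})\,P(\text{disease})}{P(\text{positive} \mid \text{disease})\,P(\text{disease}) + P(\text{positive} \mid \text{no disease})\,P(\text{no disease})} \;=\; \frac{.80 \times .01}{.80 \times .01 + .10 \times .99} \;\approx\; .075

A judgment near 80%, which tracks only the hit rate and ignores the base rate, is the kind of response traditionally labeled base-rate neglect; the question taken up below is which normative framework such responses should be evaluated against.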