When participants assess the relationship between two variables, each with levels of presence and absence, the two most robust phenomena are that: (a) observing the joint presence of the variables has the largest impact on judgment and observing joint absence has the smallest impact, and (b) participants' prior beliefs about the variables' relationship influence judgment. Both phenomena represent departures from the traditional normative model (the phi coefficient or related measures) and have therefore been interpreted as systematic errors. However, both phenomena are consistent with a Bayesian approach to the task. From a Bayesian perspective: (a) joint presence is normatively more informative than joint absence if the presence of variables is rarer than their absence, and (b) failing to incorporate prior beliefs is a normative error. Empirical evidence is reported showing that joint absence is seen as more informative than joint presence when it is clear that absence of the variables, rather than their presence, is rare. © 2006 Elsevier Inc. All rights reserved.

Keywords: Covariation assessment; Rationality; Bayesian inference

Although reasoning and decision making errors are often reported (e.g., Evans, Newstead, & Byrne, 1993; Gilovich, Griffin, & Kahneman, 2002; Kahneman & Tversky, 2000), they are often disputed as well.
For example, sometimes it is argued that participants construe tasks differently than experimenters (Hilton, 1995; Schwarz, 1996), that many errors are limited to (or at least exacerbated by) the laboratory environment (Anderson, 1990, 1991; Klayman & Ha, 1987; McKenzie, 2003, 2004a; McKenzie & Mikkelsen, 2000; McKenzie & Nelson, 2003; Oaksford & Chater, 1994, 2003), and that some purported errors are consistent with an alternative normative standard (Anderson, 1990, 1991; Chase, Hertwig, & Gigerenzer, 1998; Gigerenzer, 1991, 1996; Gigerenzer et al., 1999; McKenzie, 2004a; Sher & McKenzie, in press; Oaksford & Chater, 1994, 2003). In this article, we invoke all of the above arguments to explain robust "errors" in covariation assessment. Assessing how variables covary underlies such fundamental behaviors as learning (Hilgard & Bower, 1975), categorization (Smith & Medin, 1981), and judging causation (Cheng, 1997; Cheng & Novick, 1990; Einhorn & Hogarth, 1986), to name just a few. Crocker (1981) noted that people's ability to accurately assess covariation allows them to explain the past, control the present, and predict the future. It is hard to imagine a more important cognitive activity and, accordingly, much research has been devoted to this topic since the groundbreaking studies of Inhelder and Piaget (1958) and Smedslund (1963; for reviews, see Allan, 1993; McKenzie, 1994).

Despite the important role that covariation assessment plays in people's daily lives, most research over the last four decades examining performance with two binary variables (presumably the simplest possible case) has concluded that people are surprisingly poor at the task. Two robust...
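The abstract's claim (a), that joint presence is normatively more informative than joint absence when presence is rare, can be sketched with a small likelihood-ratio calculation. The cell probabilities below are my own illustrative choices, not the paper's:

```python
# Hypothetical 2x2 cell probabilities, chosen so that presence of each
# variable is rare: P(X present) = P(Y present) = 0.1 under both hypotheses.
# Cells: a = joint presence, b = X only, c = Y only, d = joint absence.
h_related = {"a": 0.05, "b": 0.05, "c": 0.05, "d": 0.85}  # variables related
h_indep   = {"a": 0.01, "b": 0.09, "c": 0.09, "d": 0.81}  # variables independent

# Likelihood ratio of each observation: how strongly it favors a relation.
for cell in ("a", "d"):
    print(cell, round(h_related[cell] / h_indep[cell], 3))
# a 5.0    (joint presence strongly favors a relation)
# d 1.049  (joint absence only weakly favors it)
```

With the marginals reversed so that absence is the rare level, the same arithmetic makes the joint-absence cell the more diagnostic one, which is the pattern the abstract's empirical evidence reports.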
People often test hypotheses about two variables (X and Y), each with two levels (e.g., X1 and X2). When testing "If X1, then Y1," observing the conjunction of X1 and Y1 is overwhelmingly perceived as more supportive than observing the conjunction of X2 and Y2, although both observations support the hypothesis. Normatively, the X2&Y2 observation provides stronger support than the X1&Y1 observation if the former is rarer. Because participants in laboratory settings typically test hypotheses they are unfamiliar with, previous research has not examined whether participants are sensitive to the rarity of observations. The experiment reported here showed that participants were sensitive to rarity, even judging a rare X2&Y2 observation more supportive than a common X1&Y1 observation under certain conditions. Furthermore, participants' default strategy of judging X1&Y1 observations more informative might be generally adaptive because hypotheses usually regard rare events.

A fundamental issue in the study of human inference is what, psychologically speaking, constitutes confirmatory evidence for a hypothesis. In 1945, philosopher Carl Hempel noted the following paradox regarding confirmatory evidence. Assume that the hypothesis of interest is "All ravens are black." This statement can be rewritten as "If something is a raven, then it is black," or Raven → Black. Clearly, observing a black raven would count as confirming evidence. Similarly, if the hypothesis were "If something is not black, then it is not a raven," or ~Black → ~Raven, observing a nonblack nonraven (e.g., a white shoe or a yellow pencil) would clearly be confirming evidence. Because these two hypotheses are logically equivalent (one is the contrapositive of the other), any evidence that confirms one must confirm the other. It follows, then, that observing a nonblack nonraven confirms Raven → Black.
Thus, one could apparently confirm the hypothesis about the color of ravens by sitting in one's office and never even observing a raven. Most people find this highly counterintuitive; hence, the paradox.

Other philosophers have pointed out that the paradox can be resolved if one conceives of confirmation as a matter of degree rather than all-or-none. Although black ravens and nonblack nonravens both confirm Raven → Black, they do not do so equally strongly. From a Bayesian perspective, confirming evidence supports a hypothesis to the extent that it is rare, or surprising. Because nonblack things and nonravens are both common, observing a nonblack nonraven would not be unusual and would therefore confirm the hypothesis only negligibly. In contrast, because few things are black and few things are ravens, observing a black raven would be surprising and would constitute stronger confirmation (Alexander, 1958; Good, 1960; Horwich, 1982; Hosiasson-Lindenbaum, 1940; Howson & Urbach, 1989, pp. 88-91; Mackie, 1963). The paradox appears to stem from our inability to distinguish intuitively between nonconfirmatory and minutely confirmatory evidence: The nonblack nonraven appears completely u...
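This degree-of-confirmation point can be made concrete with a likelihood-ratio sketch. All numbers are hypothetical assumptions of mine: p(raven) = 0.001, p(black | nonraven) = 0.1, and an alternative hypothesis H' under which a fraction eps = 0.2 of ravens is nonblack:

```python
# H: all ravens are black.  H': a fraction eps of ravens is nonblack.
p_raven = 0.001                  # ravens are rare (hypothetical)
p_black_given_nonraven = 0.1     # black nonravens are also rare (hypothetical)
eps = 0.2                        # nonblack-raven rate under H' (hypothetical)

# Sample a raven and observe it is black: likelihood ratio P(.|H)/P(.|H').
lr_black_raven = 1.0 / (1 - eps)

# Sample a nonblack thing and observe it is a nonraven.
p_nonblack_nonraven = (1 - p_raven) * (1 - p_black_given_nonraven)
p_nonblack_raven_hp = p_raven * eps   # nonblack ravens exist only under H'
lr_nonblack_nonraven = (p_nonblack_nonraven + p_nonblack_raven_hp) / p_nonblack_nonraven

print(round(lr_black_raven, 4))        # 1.25
print(round(lr_nonblack_nonraven, 4))  # 1.0002
```

Both ratios exceed 1, so both observations confirm the hypothesis, but the black raven confirms it far more strongly, and the nonblack nonraven's support is small enough to be intuitively invisible.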
When testing hypotheses, rare or unexpected observations are normatively more informative than common observations, and recent studies have shown that participants' behavior reflects this principle. Research has also shown that, when asked to test conditional hypotheses ("If X, then Y") that are abstract or unfamiliar, participants overwhelmingly consider a supporting observation mentioned in the hypothesis (X&Y) to be more informative than a supporting observation not mentioned (~X&~Y). These two empirical findings would mesh well if conditional hypotheses tend to be phrased in terms of rare, rather than common, events. Six experiments are reported indicating that people do have a tendency, often a very strong one, to phrase conditional hypotheses in terms of rare events. Thus, observations mentioned in conditional hypotheses might generally be considered highly informative because they usually are highly informative. © 2001 Academic Press

In the context of Bayesian hypothesis testing, data are informative to the extent that they are unexpected, or rare (e.g., Horwich, 1982; Howson & Urbach, 1989). Consider testing a forecaster's claim of being able to predict the weather in San Diego, where rainy days are rare. Which outcome would leave you more convinced of the forecaster's ability, a correct prediction of rain or a correct prediction of no rain? The correct prediction of rain should impress you more.

Address correspondence and reprint requests to Craig McKenzie, Department of Psychology, University of California, San Diego, La Jolla, CA 92093-0109. E-mail: cmckenzie@ucsd.edu.

To illustrate the normative principle, consider testing the forecaster's claim that he or she can predict the weather: "If I predict rain, then it will rain," which we take to mean that the forecaster can correctly predict the days it will and will not rain. Does a correct prediction of rain or a correct prediction of no rain constitute stronger evidence for his or her claim?
For the sake of concreteness, assume that the base rate of rain is 5%, or p(rain) = .05 and p(no rain) = .95. Assume further
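The excerpt breaks off here, but the forecaster example can be carried through numerically. The following is a sketch under assumptions of mine (the text supplies only the 5% base rate): under H1 the forecaster predicts perfectly, and under H0 the forecaster's predictions are independent of the weather but match the base rate:

```python
# Likelihood of each confirming observation under the two hypotheses.
p_rain, p_dry = 0.05, 0.95

lik_h1 = {"rain&rain": p_rain, "dry&dry": p_dry}                   # perfect forecaster
lik_h0 = {"rain&rain": p_rain * p_rain, "dry&dry": p_dry * p_dry}  # independence

for obs in lik_h1:
    print(obs, round(lik_h1[obs] / lik_h0[obs], 3))
# rain&rain 20.0   (a correct rain forecast is strong evidence)
# dry&dry 1.053    (a correct no-rain forecast is weak evidence)
```

The rare confirming observation yields a likelihood ratio of 20, the common one a ratio barely above 1, which is the normative asymmetry the passage describes.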