Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the environment. The authors introduce a general model of human judgment aimed at describing how people generate hypotheses from memory and how these hypotheses serve as the basis of probability judgment and hypothesis testing. In 3 simulation studies, the authors illustrate the properties of the model, as well as its applicability to explaining several common findings in judgment and decision making, including how errors and biases in hypothesis generation can cascade into errors and biases in judgment.
The theory of probabilistic mental models (PMM; G. Gigerenzer, U. Hoffrage, & H. Kleinbölting, 1991) has had a major influence on the field of judgment and decision making, with the most recent important modifications to PMM theory being the identification of several fast and frugal heuristics (G. Gigerenzer & D. G. Goldstein, 1996). These heuristics were purported to provide psychologically plausible cognitive process models that describe a variety of judgment behavior. In this article, the authors evaluate the psychological plausibility of the assumptions upon which PMM theory was built and, consequently, the psychological plausibility of several of the fast and frugal heuristics. The authors argue that many of PMM theory's assumptions are questionable, given available data, and that fast and frugal heuristics are, in fact, psychologically implausible.
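For readers unfamiliar with the heuristics under evaluation, the best known is Take The Best (Gigerenzer & Goldstein, 1996): to infer which of two objects scores higher on a criterion, search cues in descending order of validity and stop at the first cue that discriminates. The sketch below illustrates the lookup-and-stop logic; the cue names, their ordering, and the cue values are hypothetical stand-ins, not data from the original studies.

```python
def take_the_best(obj_a, obj_b, cues):
    """Infer which object is larger on the criterion.

    cues: cue names ordered by validity (most valid first).
    obj_a, obj_b: dicts mapping cue name -> 1 (positive), 0 (negative).
    Returns the object favored by the first discriminating cue,
    or None if no cue discriminates (the model would then guess).
    """
    for cue in cues:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a == 1 and b != 1:
            return obj_a
        if b == 1 and a != 1:
            return obj_b
    return None

# Hypothetical city-size cues in an assumed validity order, echoing the
# German-cities domain used by Gigerenzer & Goldstein (values invented).
cues = ["capital", "exposition_site", "soccer_team"]
berlin = {"name": "Berlin", "capital": 1, "exposition_site": 0, "soccer_team": 1}
leipzig = {"name": "Leipzig", "capital": 0, "exposition_site": 1, "soccer_team": 1}

winner = take_the_best(berlin, leipzig, cues)
print(winner["name"])  # search stops at the first cue, "capital"
```

The "frugality" at issue is visible in the control flow: the heuristic consults one cue at a time and ignores all remaining cues once one discriminates, which is precisely the processing assumption whose psychological plausibility the article questions.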
The question of whether computerized cognitive training leads to generalized improvements of intellectual abilities has been a popular, yet contentious, topic within both the psychological and neurocognitive literatures. Evidence for the effective transfer of cognitive training to nontrained measures of cognitive abilities is mixed, with some studies showing apparent successful transfer, while others have failed to obtain this effect. At the same time, several authors have made claims about both successful and unsuccessful transfer effects on the basis of a form of responder analysis, an analysis technique that asks whether those who gain the most on the trained task also show the greatest gains on transfer tasks. Through a series of Monte Carlo experiments and mathematical analyses, we demonstrate that the apparent transfer effects observed through responder analysis are illusory and are independent of the effectiveness of cognitive training. We argue that responder analysis can be used neither to support nor to refute hypotheses related to whether cognitive training is a useful intervention to obtain generalized cognitive benefits. We end by discussing several proposed alternative analysis techniques that incorporate training gain scores and argue that none of these methods are appropriate for testing hypotheses regarding the effectiveness of cognitive training.
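A minimal Monte Carlo sketch can make the illusion concrete. This is not the authors' simulation; it assumes one plausible mechanism, namely that scores measured in the same session share occasion-specific noise (motivation, fatigue on the test day). Under that assumption, trainees selected for large gains on the trained task also show larger gains on an untrained transfer task, even though training here has exactly zero true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated trainees; training has ZERO true effect by construction

ability = rng.normal(0, 1, n)       # stable trait shared by both tasks
pre_session = rng.normal(0, 1, n)   # assumed day-specific state at pretest
post_session = rng.normal(0, 1, n)  # independent day-specific state at posttest

def observed(session_state):
    # observed score = trait + shared session state + task-specific noise
    return ability + session_state + rng.normal(0, 1, n)

pre_train, pre_transfer = observed(pre_session), observed(pre_session)
post_train, post_transfer = observed(post_session), observed(post_session)

gain_train = post_train - pre_train
gain_transfer = post_transfer - pre_transfer

# Responder analysis: do the biggest training gainers transfer the most?
r = np.corrcoef(gain_train, gain_transfer)[0, 1]
responders = gain_train > np.median(gain_train)
diff = gain_transfer[responders].mean() - gain_transfer[~responders].mean()
print(f"gain-gain correlation: {r:.2f}")    # reliably positive
print(f"responder advantage:   {diff:.2f}")  # positive despite no training effect
```

Because both gain scores difference out the same two session states, they covary regardless of whether training did anything, so a positive "responder" result carries no information about training effectiveness; this is one route by which the paper's broader point, that such effects are independent of training efficacy, can arise.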
This study investigated the relationships between cognitive ability (as assessed by the Raven Progressive Matrices Test [RPM] and the Visual-Span Test [VSPAN]) and individuals' performance in three dynamic decision making (DDM) tasks (i.e., regular Water Purification Plant [WPP], Team WPP, and Firechief). Participants interacted repeatedly with one of the three microworlds. Our results indicate a positive association between VSPAN and RPM scores and between each of those measures and performance in the three dynamic tasks. Practice had no effect on the correlation between RPM score and performance in any of the microworlds, but it led to an increased correlation between VSPAN score and performance in Team WPP. The pattern of associations between performance in microworlds and assessments of cognitive ability was consistent with the task requirements of the microworlds. These findings provide insight into the cognitive demands of dynamic decision making and the dynamics of the relationships between cognitive ability and performance with task practice.
Noisy recordings of dialogue often serve as evidence in criminal proceedings. The present article explores the ability of two types of contextual information, currently present in the legal system, to bias subjective interpretations of such evidence. The present experiments demonstrate that the general context of the legal system and the presence of transcripts of the recorded speech are both able to bias interpretations of degraded and benign recordings toward the interpretable and incriminating. Furthermore, we demonstrate a curse of knowledge, whereby people provided with transcripts become miscalibrated to the true quality of degraded recordings. Current methods of dealing with auditory evidence are insufficient to mitigate the effects of biasing information within the criminal justice system.