Diagnostic error is commonly multifactorial in origin, typically involving both system-related and cognitive factors. The results identify the dominant problems that should be targeted for additional research and early reduction; they also further the development of a comprehensive taxonomy for classifying diagnostic errors.
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic error in medicine.

"No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. They can never be eradicated, however, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of diagnosis.

"System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements but never eliminated, because such improvements lag behind practice and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee that system errors will persist when resources are merely shifted.

"Cognitive errors" reflect misdiagnosis arising from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human information processing and the inherent biases of heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness.

Diagnostic error can be substantially reduced, but never eradicated.
Memory distortions sometimes serve a purpose: It may be in our interest to misremember some details of an event or to forget others altogether. The present work examines whether a similar phenomenon occurs for source attribution. Given that the source of a memory provides information about the accuracy of its content, people may be biased toward source attributions that are consistent with desired accuracy. In Experiment 1, participants read desirable and undesirable predictions made by sources differing in their a priori reliability and showed a wishful thinking bias: that is, a bias to attribute desirable predictions to the reliable source and undesirable predictions to the unreliable source. Experiment 2 showed that this wishful thinking effect depends on retrieval processes. Experiment 3 showed that under some circumstances, wishes concerning one event can produce systematic source memory errors for others.
When making source attributions, people tend to attribute desirable statements to reliable sources and undesirable statements to unreliable sources, a phenomenon known as the wishful thinking effect (Gordon, Franklin, & Beck, 2005). In the present study, we examined the influence of wishful thinking on source monitoring for self-relevant information. On one hand, wishful thinking is expected, because self-relevant desires are presumably strong. However, self-relevance is known to confer a memory advantage and may thus provide protection from desire-based biases. In Experiment 1, source memory for self-relevant information was contrasted against source memory for information relevant to others and for neutral information. Results indicated that self-relevant information was affected by wishful thinking and was remembered more accurately than was other information. Experiment 2 showed that the magnitude of the self-relevant wishful thinking effect did not increase with a delay.
Understanding the development of public opinion about emerging technologies, when the scope of that emergence is still speculative, poses particular challenges. Opinions and beliefs may be drawn from conflicting experts in multiple fields, media portrayals with varying biases, and fictional narratives that portray diverse possible futures. This article draws on research in cognitive and social psychology to discuss how fiction in particular may influence beliefs about emerging technologies such as nanotechnology and biotechnology. Fiction can affect beliefs about the developments that are most likely, the relative weight of possible risks and benefits, and the desirability of potential technology-related outcomes. These beliefs, in turn, influence public support of regulation and funding, sometimes in ways that have little to do with the actual issues immediately at hand.

Public opinion on science and technology draws from many sources. Most often, it includes an important component: technology's perceived practical results for everyday life. In some cases, however, this component is by necessity minimal. Emerging technologies, such as nanotechnology or high-level biotechnology, are hardly without usable (and in the latter case ubiquitous) results. Nevertheless, the perception of much of the public is that their most dramatic developments are yet to come, and opinion on them is strongly tied to beliefs about what form those developments will take.

Studies examining the formation of attitudes toward developing technologies can give wildly varying results depending on the methodology used. Nanotechnology is a good case study. Research in this area has often been limited by the general population's ignorance of its parameters. A common strategy, therefore, has been to educate a sample of the population prior to surveying their opinions of the subject. This tack was taken by both Nanojury UK (2005) and the Woodrow Wilson Center Survey (Macoubrie, 2005).
Both studies found largely positive attitudes toward the new technology, along with strong concerns about the need for oversight and regulation.

Scheufele (2008), by contrast, found that only 29.5% of Americans viewed nanotechnology as morally acceptable. The concerns of the remaining 70.5% were based largely on potential dramatic developments in human enhancement. Meanwhile, current research in nanotechnology produces advances in sunscreen and odor-resistant socks (Project on Emerging Technologies, 2008). Those who expressed their disapproval in the survey may well use nanotechnological materials in their day-to-day lives. This study, however, was notable for the fact that the participants' attitudes were not influenced by experimenter-created materials.

Similarly, strong negative attitudes about biotechnology often focus on potential future innovations such as human cloning (Turnpenny, 2005). These attitudes can persist even when people become better informed about the current status of research and are based on factors other than that awareness (Nisbett, 2005). So wher...
Unfortunately, general limits on cognitive performance extend to diagnostic situations. The authors remain optimistic about reducing cognition-based error, though not as optimistic as Croskerry (see his accompanying article), because predictable patterns of error will persist.
This study compared explicit and behavioural measures of source credibility judgements based on two factors: a source's past record of accuracy and its production of predictions that participants would like to believe. The former is considered a rational factor for judging credibility, while the latter is considered nonrational (i.e., it does not predict actual credibility). In Experiments 1 and 2, participants saw an equal number of predictions from two sources, one of which was either highly or slightly more accurate/desirable than the other. In Experiment 3, either one source was high in accuracy and the other high in desirability, or one source was higher on both dimensions. For all experiments, participants then saw new accurate and inaccurate predictions and said which source they thought was most likely to have produced each (behavioural task). Participants then gave a percentage rating of each source's perceived accuracy (explicit judgement task). Participants showed sensitivity to past accuracy differences on both tasks, but not to the size of those differences. Desirability influenced performance only on the behavioural task. However, when the two factors conflicted, participants responded solely on the basis of past accuracy. Behaviours thus reflect source credibility judgements based on both rational and nonrational factors, but participants appear to be both more strongly influenced by the rational factor and more aware of that influence.