Objectives. Diagnostic errors are a known patient safety concern across all clinical settings, including the emergency department (ED). We conducted a systematic review to determine the most frequent diseases and clinical presentations associated with diagnostic errors (and resulting harms) in the ED, to measure error and harm frequency, and to assess causal factors.

Methods. We searched PubMed®, Cumulative Index to Nursing and Allied Health Literature (CINAHL®), and Embase® from January 2000 through September 2021. We included research studies and targeted grey literature reporting diagnostic errors or misdiagnosis-related harms in EDs in the United States or in other developed countries with ED care deemed comparable by a technical expert panel. We applied standard definitions for diagnostic errors, misdiagnosis-related harms (adverse events), and serious harms (permanent disability or death). Preventability was determined by the original study authors or inferred from differences in harms across groups. Two reviewers independently screened search results for eligibility; serially extracted data regarding common diseases, error/harm rates, and causes/risk factors; and independently assessed risk of bias of included studies. We synthesized results for each question and extrapolated U.S. estimates. We present 95% confidence intervals (CIs) or plausible range (PR) bounds, as appropriate.

Results. We identified 19,127 citations and included 279 studies. The top 15 clinical conditions associated with serious misdiagnosis-related harms (accounting for 68% [95% CI 66 to 71] of serious harms) were (1) stroke, (2) myocardial infarction, (3) aortic aneurysm and dissection, (4) spinal cord compression and injury, (5) venous thromboembolism, (6/7 – tie) meningitis and encephalitis, (6/7 – tie) sepsis, (8) lung cancer, (9) traumatic brain injury and traumatic intracranial hemorrhage, (10) arterial thromboembolism, (11) spinal and intracranial abscess, (12) cardiac arrhythmia, (13) pneumonia, (14) gastrointestinal perforation and rupture, and (15) intestinal obstruction. Average disease-specific error rates ranged from 1.5% (myocardial infarction) to 56% (spinal abscess), with additional variation by clinical presentation (e.g., missed stroke averaged 17% overall, but 4% for patients presenting with weakness and 40% for those with dizziness/vertigo). There was also wide, superimposed variation by hospital (e.g., missed myocardial infarction ranged from 0% to 29% across hospitals within a single study). An estimated 5.7% (95% CI 4.4 to 7.1) of all ED visits had at least one diagnostic error. Estimated preventable adverse event rates were as follows: any harm severity (2.0%, 95% CI 1.0 to 3.6), any serious harms (0.3%, PR 0.1 to 0.7), and deaths (0.2%, PR 0.1 to 0.4). While most disease-specific error rates were derived mainly from U.S.-based studies, overall error and harm rates were derived from three prospective studies conducted outside the United States (in Canada, Spain, and Switzerland; combined n=1,758). If the overall rates are generalizable to all U.S. ED visits (130 million, 95% CI 116 to 144), they would translate to 7.4 million (PR 5.1 to 10.2) ED diagnostic errors annually; 2.6 million (PR 1.1 to 5.2) diagnostic adverse events with preventable harms; and 371,000 (PR 142,000 to 909,000) serious misdiagnosis-related harms, including more than 100,000 permanent, high-severity disabilities and 250,000 deaths.
Although errors were often multifactorial, 89% (95% CI 88 to 90) of diagnostic error malpractice claims involved failures of clinical decision-making or judgment, regardless of the underlying disease. Key process failures were errors in diagnostic assessment, test ordering, and test interpretation. These were most often attributed to inadequate knowledge, skills, or reasoning, particularly in “atypical” or otherwise subtle case presentations. Limitations included reliance on malpractice claims and incident reports for the distribution of diseases leading to serious harms, reliance on a small number of non-U.S. studies for overall (disease-agnostic) diagnostic error and harm rates, and methodologic variability across studies in measuring disease-specific rates, determining preventability, and assessing causal factors.

Conclusions. Although estimated ED error rates are low (and comparable to those found in other clinical settings), the number of patients potentially affected is large. Not all diagnostic errors or harms are preventable, but wide variability in diagnostic error rates across diseases, symptoms, and hospitals suggests that improvement is possible. With 130 million U.S. ED visits, the estimated rates for diagnostic error (5.7%), misdiagnosis-related harms (2.0%), and serious misdiagnosis-related harms (0.3%) could translate to more than 7 million errors, 2.5 million harms, and 350,000 patients suffering potentially preventable permanent disability or death. Over two-thirds of serious harms are attributable to just 15 diseases and are linked to cognitive errors, particularly in cases with “atypical” manifestations. Scalable solutions to enhance bedside diagnostic processes are needed, and these should target the most commonly misdiagnosed clinical presentations of the key diseases causing serious harms. New studies should confirm that the overall rates are representative of current U.S.-based ED practice and should focus on identified evidence gaps (errors among common diseases with lower-severity harms, pediatric ED errors and harms, dynamic systems factors such as overcrowding, and false positives). Policy changes to consider based on this review include: (1) standardizing measurement and research results reporting to maximize comparability of measures of diagnostic error and misdiagnosis-related harms; (2) creating a National Diagnostic Performance Dashboard to track performance; and (3) using multiple policy levers (e.g., research funding, public accountability, payment reforms) to facilitate the rapid development and deployment of solutions to this critically important patient safety concern.
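To make the extrapolation arithmetic behind the national estimates above concrete, here is a minimal sketch, not the report's actual methodology: it simply multiplies the published rates by the published visit volumes, whereas the report's plausible ranges were derived with more detailed methods, so the bounds printed here only approximate the published ones.

```python
# Minimal sketch of the rate-times-volume extrapolation described above.
# Rates and ED visit counts come from the abstract; multiplying point
# estimates and interval bounds directly is a simplification, so the
# printed ranges only approximate the report's published plausible ranges.

ED_VISITS_MILLIONS = (130, 116, 144)  # annual U.S. ED visits: estimate, 95% CI low, high

rates = {
    "diagnostic errors":          (0.057, 0.044, 0.071),  # 5.7% (95% CI 4.4 to 7.1)
    "preventable harms (any)":    (0.020, 0.010, 0.036),  # 2.0% (95% CI 1.0 to 3.6)
    "serious misdiagnosis harms": (0.003, 0.001, 0.007),  # 0.3% (PR 0.1 to 0.7)
}

visits, v_lo, v_hi = ED_VISITS_MILLIONS
for outcome, (rate, r_lo, r_hi) in rates.items():
    print(f"{outcome}: {rate * visits:.1f}M per year "
          f"(range {r_lo * v_lo:.1f}M to {r_hi * v_hi:.1f}M)")
```

Running this reproduces the order of magnitude quoted above (about 7.4 million errors and 2.6 million preventable harms per year).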
This qualitative study describes how respected hospitalists think about excellence in clinical care in hospital medicine. Their perspectives can guide continuing medical education, so that programs focus on helping learners develop toward excellence rather than treating competence alone as the target objective.
OBJECTIVES To develop and validate a new inpatient satisfaction metric to assess patients' perceptions of hospitalist performance. PATIENTS AND METHODS We developed the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH) by building on the theoretical underpinnings of the quality of care measures that the Society of Hospital Medicine endorses. TAISCH was completed by inpatients at an academic institution between September 2012 and December 2012 after they had been cared for by the same hospitalist provider for at least 2 consecutive days. Content, internal structure, and convergent/discriminant validity evidence were assessed for TAISCH. RESULTS A total of 203 patients each rated 1 of our 29 hospitalists (patient response rate: 88%). Factor analyses resulted in a single factor with 15 items. Reliability of TAISCH was good (Cronbach's α = .88). The hospitalists' average TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation] = 3.82 [0.24]; possible score range: 1–5). TAISCH showed significant positive associations with a validated empathy scale and with a global provider satisfaction question (β = 12.2 and β = 11.2, respectively; both P < 0.001). At the provider level, no significant correlation was noted between the Press Ganey Physician score and TAISCH (r = 0.91, P = 0.51). CONCLUSION TAISCH collects patient satisfaction data that are attributable to specific hospitalist providers. The timeliness of TAISCH data collection also makes real-time service recovery possible, which is unachievable with other commonly used patient satisfaction metrics. Journal of Hospital Medicine 2014;9:553–558. © 2014 Society of Hospital Medicine
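The reliability figure reported for TAISCH (Cronbach's α = .88) is the standard internal-consistency statistic, α = k/(k−1) × (1 − Σ item variances / variance of the total score). As a hedged illustration only, the sketch below computes it for an invented Likert response matrix; the study's own 203-patient, 15-item data are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of row totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy 1-5 Likert responses (6 patients x 4 items), purely illustrative;
# this is not the study's data.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```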
DENT serves as a paid consultant, reviewing medicolegal cases for both plaintiff and defense firms related to misdiagnosis of neurologic conditions, including dizziness and stroke. He has conducted government and foundation funded research related to diagnostic error, dizziness, and stroke. He has been loaned research equipment related to diagnosis of dizziness and stroke by two commercial companies (GN Otometrics and Interacoustics) and Johns Hopkins has licensed related diagnostic decision-support technology to GN Otometrics for which DENT receives royalties.
Patients with delirium admitted to non-teaching hospitals had comparable clinical and process outcomes achieved at lower costs. Further research should explore the contextual factors underlying these differences in healthcare costs.
OBJECTIVE: To establish a metric for evaluating hospitalists' documentation of clinical reasoning in admission notes. STUDY DESIGN: Retrospective study. SETTING: Admissions from 2014 to 2017 at three hospitals in Maryland. PARTICIPANTS: Hospitalist physicians. MEASUREMENTS: A subset of patients admitted with fever, syncope/dizziness, or abdominal pain was randomly selected. The nine-item Clinical Reasoning in Admission Note Assessment & Plan (CRANAPL) tool was developed to assess the comprehensiveness of clinical reasoning documented in the assessment and plans (A&Ps) of admission notes. Two authors scored all A&Ps using this tool; A&Ps were also scored with global clinical reasoning and global readability/clarity measures. All data were deidentified prior to scoring. RESULTS: The 285 admission notes evaluated were authored by 120 hospitalists. The mean total CRANAPL score across both raters was 6.4 (SD 2.2). The intraclass correlation measuring interrater reliability for the total CRANAPL score was 0.83 (95% CI, 0.76-0.87). Associations between the total CRANAPL score and the global clinical reasoning and global readability/clarity measures were statistically significant (P < .001). Notes from the academic hospitals had higher CRANAPL scores (7.4 [SD 2.0] and 6.6 [SD 2.1]) than those from the community hospital (5.2 [SD 1.9]; P < .001). CONCLUSIONS: This study represents a first step toward characterizing clinical reasoning documentation in hospital medicine. With some validity evidence established for the CRANAPL tool, it may be possible to assess hospitalists' documentation of clinical reasoning.
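For readers who want to reproduce an interrater reliability check like the CRANAPL ICC of 0.83, the sketch below uses invented scores for two raters. The abstract does not state which ICC form was used, so ICC(2,1) (two-way random effects, absolute agreement), a common choice when both raters score every note, is assumed here.

```python
# Hedged sketch of an interrater-reliability check like the one reported
# for CRANAPL. The scores below are invented, not the study's data, and
# the choice of ICC(2,1) is an assumption; the abstract does not specify
# the ICC form used.
import pandas as pd
import pingouin as pg  # pip install pingouin

scores = pd.DataFrame({
    "note":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater": ["A", "B"] * 5,
    "score": [6, 7, 4, 5, 8, 8, 5, 4, 7, 6],  # illustrative total scores
})

icc = pg.intraclass_corr(data=scores, targets="note",
                         raters="rater", ratings="score")
print(icc[icc["Type"] == "ICC2"][["Type", "ICC", "CI95%"]])
```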
Most hospitalists in our study felt that TAISCH provided meaningful feedback.
Objectives Diagnostic errors are pervasive in medicine and are most often caused by clinical reasoning failures. Clinical presentations characterized by nonspecific symptoms with broad differential diagnoses (e.g., dizziness) are especially prone to such errors. Methods We hypothesized that novice clinicians could achieve proficiency in diagnosing dizziness by training with virtual patients (VPs). This was a prospective, quasi-experimental, pretest-posttest study (2019) at a single academic medical center. Internal medicine interns (intervention group) were compared with second- and third-year residents (control group). A case library of VPs with dizziness was developed from a clinical trial (AVERT-NCT02483429). The approach (VIPER – Virtual Interactive Practice to build Expertise using Real cases) consisted of brief lectures combined with 9 h of supervised deliberate practice. Residents were provided with dizziness-related readings and teaching modules. Both groups completed pretests and posttests. Results For interns (n=22) vs. residents (n=18), pretest median diagnostic accuracy did not differ between groups (33% [IQR 18–46] vs. 31% [IQR 13–50], p=0.61), while posttest accuracy did (50% [IQR 42–67] vs. 20% [IQR 17–33], p=0.001). Pretest median appropriate imaging did not differ between groups (33% [IQR 17–38] vs. 31% [IQR 13–38], p=0.89), while posttest appropriateness did (65% [IQR 52–74] vs. 25% [IQR 17–36], p<0.001). Conclusions Just 9 h of deliberate practice increased the diagnostic skills (both accuracy and testing appropriateness) of medicine interns evaluating real-world dizziness cases ‘in silico’ more than ∼1.7 years of residency training did. Applying condensed educational experiences such as VIPER across a broad range of common presentations could substantially enhance diagnostic education and translate into improved patient care.
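The abstract reports between-group comparisons as medians with IQRs and p-values but does not name the statistical test; a nonparametric test such as the Mann-Whitney U is the usual choice for data like these. The sketch below is illustrative only, with invented accuracy scores rather than the study's data.

```python
# Hedged sketch of the kind of between-group comparison summarized above.
# Mann-Whitney U is assumed (the abstract does not name the test), and
# the score vectors below are invented, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

interns_posttest   = np.array([50, 42, 67, 58, 50, 46, 63, 55])  # % correct, illustrative
residents_posttest = np.array([20, 17, 33, 25, 22, 30, 18, 27])

stat, p = mannwhitneyu(interns_posttest, residents_posttest,
                       alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")

def median_iqr(x: np.ndarray) -> str:
    """Format a score vector as median [IQR], matching the abstract's style."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.0f}% [IQR {q1:.0f}-{q3:.0f}]"

print("interns:", median_iqr(interns_posttest),
      "| residents:", median_iqr(residents_posttest))
```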