2020
DOI: 10.1055/s-0040-1713750
Feasibility and Reliability Testing of Manual Electronic Health Record Reviews as a Tool for Timely Identification of Diagnostic Error in Patients at Risk

Abstract: Background Although diagnostic error (DE) is a significant problem, it remains challenging for clinicians to identify it reliably and to recognize its contribution to the clinical trajectory of their patients. The purpose of this work was to evaluate the reliability of real-time electronic health record (EHR) reviews using a search strategy for the identification of DE as a contributor to rapid response team (RRT) activation. Objectives Early and accurate recognition of critical illness is of par…

Cited by 7 publications (5 citation statements) | References 34 publications
“…Three studies focused on measurement and surveillance of diagnostic errors. The prospective observational study by Soleimani et al, 31 which focused on accuracy and timeliness, used medical record review that applied diagnostic criteria (e.g., a new diagnostic label within 24 hours after rapid response team activation and whether any features indicative of that diagnosis were present for greater than 6 hours before the first documentation of that new diagnosis). Where there was disagreement, a second review was conducted using the taxonomy delineating stages in the diagnostic process used in the study by Schiff et al 12 : history, physical exam, testing, assessment, referral/consultation, and follow-up.…”
Section: Results (mentioning)
confidence: 99%
“…Nine studies 20,23,24,26,29,30,32,33,35 indicated using the NASEM definition; 5 of those 20,23,24,30,32 operationalized it using a definition proposed before the NASEM report (see Table 3 for the list of definitions). Three studies 21,31,34 operationalized error using existing definitions, and 4 studies 22,25,27,28 operationalized components of the NASEM definition (i.e., accuracy, timeliness, communication) for the purpose of the study and did not cite existing definitions. To capture content focus, we grouped studies according to the area of focus for which the definition was used: epidemiology, patient perspectives, measurement/surveillance, and clinician perspectives.…”
Section: Results (mentioning)
confidence: 99%
“…We did try to reach several members of the responding or admitting teams to gather multiple perceptions. Our acute hospital learning laboratory is developing strategies to identify DE in the EHR in near real-time, 22 and we plan to correlate the findings of DE from surveys with the EHR data.…”
Section: Discussion (mentioning)
confidence: 99%
“…Electronic health records (EHRs) cannot reliably capture all of the cues used by clinicians as they formulate a diagnosis. Discerning clinicians’ thought process is something that traditional measurement mechanisms including autopsies, 19,20 chart reviews, 21,22 and malpractice claims 23,24 often fail to do. When prompted, clinicians are well positioned to recognize errors and to lead efforts to mitigate their impact 13 .…”
(mentioning)
confidence: 99%
“…3 As interest in this area has increased, studies have used various methods to better understand the prevalence and sources of DEOD. These methods include experimental simulations 4 ; reviewing charts, 5 patient and family medical complaints, 6 and malpractice claims 7,8 ; dashboards for proactive identification 9 ; physician surveys 10-12 or self-report 13 ; and a combined approach of observation and focus groups and interviews. 14 However, the precise rate of diagnostic error remains difficult to determine due to differences in methods used and how diagnostic error is defined.…”
Section: Introduction (mentioning)
confidence: 99%