2021
DOI: 10.1111/acem.14190

Machine Learning Versus Usual Care for Diagnostic and Prognostic Prediction in the Emergency Department: A Systematic Review

Abstract: Objective: Having shown promise in other medical fields, we sought to determine whether machine learning (ML) models perform better than usual care in diagnostic and prognostic prediction for emergency department (ED) patients. Methods: In this systematic review, we searched MEDLINE, Embase, CENTRAL, and CINAHL from inception to October 17, 2019. We included studies comparing diagnostic and prognostic prediction of ED patients by ML models to usual care methods (triage‐based scores, clinical prediction tools, cl…

Cited by 42 publications (29 citation statements)
References 47 publications (91 reference statements)
“…Potential sources of bias that may cause flawed or distorted model predictions were found in every model, for example, from minor (not reporting handling of missing values [ 38 , 39 , 43 , 44 , 47 ], univariate predictor selection [ 39 , 47 ]) to potentially damaging (dichotomized continuous variables [ 22 , 41 , 43 ], low events-per-variable [ 44 ], no external validation [ 38 , 41 , 42 , 44 , 47 ]), which suggest that study reports of models’ abilities to predict outcomes have the potential to be flawed. This is consistent with other evaluations of prediction modeling studies [ 34 ], including evaluations applying CHARMS and PROBAST in the emergency department setting [ 35 , 36 ].…”
Section: Discussion (supporting)
confidence: 90%
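To make the events-per-variable point concrete, the check is simple arithmetic that a reader can reproduce from a study's reported counts. The sketch below is a minimal Python illustration, assuming the commonly cited rule of thumb of roughly 10 outcome events per candidate predictor; the counts are hypothetical, not taken from any reviewed study.

```python
# Minimal sketch: flag a low events-per-variable (EPV) ratio, one of the
# analysis-domain problems flagged in the quoted review. All counts below
# are hypothetical, and the ~10 EPV threshold is a convention, not a rule.

def events_per_variable(n_events: int, n_candidate_predictors: int) -> float:
    """EPV = outcome events / candidate predictors considered (not just those kept)."""
    return n_events / n_candidate_predictors

epv = events_per_variable(n_events=85, n_candidate_predictors=24)
print(f"EPV = {epv:.1f}")  # EPV = 3.5
if epv < 10:
    print("Low EPV: estimates are prone to overfitting and optimistic performance.")
```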
“…As the number of prediction modeling publications continues to grow, the need exists to apply the same rigor to systematic reviews of health care–related prediction modeling as has been applied to clinical trial and other types of systematic reviews through the use of tools, such as PROBAST and CHARMS, to facilitate quality assessment for individual prediction model studies using standardized guidelines [ 30 , 33 ]. Only two systematic reviews [ 35 , 36 ] that have focused on increasing overall throughput by decreasing emergency department boarding and systemic exit block in health systems applied the rigorous PROBAST and CHARMS methodologies, with both reporting a high degree of bias in the studies that they examined.…”
Section: Introduction (mentioning)
confidence: 99%
“…A systematic review of 23 studies about machine learning for diagnostic and prognostic predictions in emergency departments found that analysis was the most poorly rated domain, with 20 studies at high risk of bias [32]. This study found deficiencies in how continuous variables and missing data were handled, and that model calibration was rarely reported. Another publication about machine learning risk prediction models for triage of patients in the emergency department also considered 22/25 studies at high risk of bias.…”
Section: Discussion (mentioning)
confidence: 93%
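The calibration gap noted above is straightforward to check whenever predicted probabilities are available. Below is a minimal sketch of such a check using scikit-learn's calibration_curve; the model and data are synthetic stand-ins, not taken from any study in the review.

```python
# Minimal sketch of a calibration check: compare mean predicted risk with
# the observed event rate in each probability bin. A well-calibrated model
# tracks the diagonal (observed ~= predicted). Data and model are synthetic.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]  # predicted risk of the positive class

observed, predicted = calibration_curve(y_te, probs, n_bins=10)
for p, o in zip(predicted, observed):
    print(f"predicted {p:.2f} -> observed {o:.2f}")
```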
“…However, the authors mentioned that many studies have limited applicability to clinical practice and that there are considerations beyond performance metrics, as well as barriers, that must be taken into account for ML models to succeed in real life [51]. Therefore, despite the great research successes in building ML-based predictive models for clinical practice, there remain few examples of ML models being successfully integrated into the daily routine or critical parts of clinical environments [52]. This reveals that what is being done in research is not fully aligned with the realities of clinical practice.…”
Section: Discussion (mentioning)
confidence: 99%