Infections and Sepsis Development 2021
DOI: 10.5772/intechopen.98957
An Explainable Machine Learning Model for Early Prediction of Sepsis Using ICU Data

Abstract: Early identification of individuals with sepsis is very useful in assisting clinical triage and decision-making, resulting in early intervention and improved outcomes. This study aims to develop an explainable machine learning model with clinical interpretability to predict sepsis onset 6 hours in advance and to validate improved prediction risk power for every time interval since admission to the ICU. The retrospective observational cohort study is carried out using PhysioNet Challenge 2019 ICU data from th…

Cited by 4 publications (2 citation statements)
References 41 publications (36 reference statements)
“…Moreover, we did not employ advanced deep learning methods in either the recognition of sepsis or the prediction of mortality as described in previous studies. 38–42 However, standardized clinical criteria have been shown to have good reliability for identifying sepsis in the EMR, 43,44 and a trade-off between data complexity and model interpretability also exists in deep learning algorithms. 45,46 Third, in-hospital mortality could be biased by hospital discharge practice and length of hospital stay, 47 and may not necessarily reflect the quality of care.…”
Section: Discussion
confidence: 99%
“…In many cases, the claim of interpretability is warranted, for instance in manuscripts that use methods such as logistic regression or even more advanced methods such as explainable boosting machines [70,71]. However, a large volume of studies (a small sampling, for example [72][73][74][75][76]) are in actuality putting forward black boxes as explainable by using SHAP or similar methodologies.…”
Section: Critical Look at the Applied Literature
confidence: 99%
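The last excerpt contrasts inherently interpretable models (logistic regression, explainable boosting machines) with black boxes explained post hoc via SHAP. The distinction can be illustrated with a minimal pure-Python sketch of the inherently interpretable case: in a logistic model, each feature's contribution to the log-odds is simply coefficient × value, so no separate explanation method is needed. The coefficients and vital-sign values below are illustrative placeholders, not taken from the cited study.

```python
import math

# Illustrative coefficients for a logistic sepsis-risk score (hypothetical,
# not from the study): each feature contributes coefficient * value to the
# log-odds, which makes the model's reasoning directly inspectable.
COEFFICIENTS = {"heart_rate": 0.03, "lactate": 0.8, "resp_rate": 0.05}
INTERCEPT = -6.0

def sepsis_risk(features):
    """Return (risk probability, per-feature log-odds contributions)."""
    contributions = {name: COEFFICIENTS[name] * value
                     for name, value in features.items()}
    log_odds = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-log_odds))
    return risk, contributions

# Example patient: the contributions dict itself is the explanation.
patient = {"heart_rate": 118, "lactate": 3.9, "resp_rate": 26}
risk, contribs = sepsis_risk(patient)
```

For a black-box model (gradient boosting, deep networks), no such additive decomposition exists in the model itself, which is why post hoc tools like SHAP are used and why the excerpt questions calling the result "explainable".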