2021
DOI: 10.1016/j.compbiomed.2021.104813

Interpretable prediction of 3-year all-cause mortality in patients with heart failure caused by coronary heart disease based on machine learning and SHAP

Cited by 98 publications (50 citation statements)
References 31 publications
“…Lundberg et al developed the SHAP model, which uses the SHAP value as a uniform measure of the importance of the features used in machine learning models (Lundberg et al., 2020). By attributing output values to the Shapley value of each feature, researchers have performed interpretability analysis of machine learning models (Wang et al., 2021; Wojtuch et al., 2021; Scavuzzo et al., 2022). In this study, high gene expression of ATP6V1D had a positive impact on prediction, whereas low gene expression of ATP6V1D negatively impacted prediction, similar to CLIC1.…”
Section: Discussion
confidence: 99%
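The attribution step described in the excerpt above can be illustrated with a minimal sketch: a tree model is fit on synthetic data, each prediction is decomposed into per-feature Shapley values, and the mean absolute SHAP value serves as a uniform importance measure. The data, feature count, and model settings below are assumptions for illustration, not details taken from the cited studies.

```python
# Minimal sketch of SHAP attribution for a tree ensemble
# (synthetic data; nothing here is taken from the cited studies).
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # 4 illustrative features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic binary outcome

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer attributes each model output to per-feature Shapley values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature acts as a uniform importance measure.
importance = np.abs(shap_values).mean(axis=0)
print(importance)
```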
“…It was feasible for physicians to use our models to evaluate the risk of DM, because the predictors used by our models were readily available in clinical practice. Additionally, the SHAP method was used to better explain the results of the XGB and RF models, which has been proven effective in several studies [29][30][31][32]. The outcomes of the XGB and RF models could be made clinically understandable and visualized intuitively.…”
Section: Discussion
confidence: 99%
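As a companion sketch of the intuitive visualization mentioned in the excerpt above, the same kind of attributions can be rendered as a SHAP summary (beeswarm) plot. The data and feature names below are synthetic placeholders, not the predictors used in the cited study.

```python
# Sketch of an intuitive SHAP visualization for a tree-based risk model
# (synthetic data; feature names are placeholders, not the study's predictors).
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(1)
feature_names = ["age", "bmi", "glucose", "sbp"]  # illustrative only
X = rng.normal(size=(300, 4))
y = (0.8 * X[:, 2] + 0.4 * X[:, 0] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Beeswarm summary: one point per sample and feature, colored by feature value
# and positioned by its push toward higher or lower predicted risk.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```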
“…Given the inherent power of machine learning algorithms to capture nonlinear relationships, more researchers advocate the use of new prediction models based on machine learning, rather than traditional illness severity classification systems such as SOFA, APACHE II, or SAPS II, to support appropriate treatment for patients [9][10][11]. Although a large number of predictive models have shown promising performance in research, evidence for their application in clinical settings and interpretable risk prediction models to aid disease prognosis are still limited [12][13][14][15].…”
Section: Introduction
confidence: 99%