2020
DOI: 10.1200/cci.20.00002

Machine Learning–Based Interpretation and Visualization of Nonlinear Interactions in Prostate Cancer Survival

Abstract: PURPOSE Shapley additive explanation (SHAP) values represent a unified approach to interpreting predictions made by complex machine learning (ML) models, with superior consistency and accuracy compared with prior methods. We describe a novel application of SHAP values to the prediction of mortality risk in prostate cancer. METHODS Patients with nonmetastatic, node-negative prostate cancer, diagnosed between 2004 and 2015, were identified using the National Cancer Database. Model features were specified a priori…
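As context for the abstract, here is a minimal sketch of how SHAP values are typically computed for a tree-based risk model with the shap Python package. This is not the paper's actual pipeline: the features, labels, and model below are hypothetical stand-ins.

```python
# Minimal sketch of SHAP interpretation for a tree-based risk model.
# Not the paper's pipeline: features and labels are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical clinical features (age, PSA, Gleason score)
X = np.column_stack([
    rng.integers(45, 90, n),   # age at diagnosis (years)
    rng.integers(1, 50, n),    # PSA (ng/mL)
    rng.integers(6, 11, n),    # Gleason score (6-10)
]).astype(float)
# Synthetic mortality label loosely tied to the features
y = (0.03 * X[:, 0] + 0.05 * X[:, 1] + 0.4 * X[:, 2]
     + rng.normal(0, 1, n) > 7).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature
print(np.abs(shap_values).mean(axis=0))
```

Plotting such values (for example with shap.summary_plot) is the usual route to the kind of nonlinear-interaction visualization the title refers to.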

Cited by 59 publications (45 citation statements)
References 16 publications
“…The risk and corresponding hazard ratio (HR) for severe condition were analysed using Python by fourfold cross-validation. Cut-off values of BG to predict COVID-19 severity were analysed using Python by the SHAP (SHapley Additive exPlanations) method [14]. A difference with a two-tailed P value < 0.05 was considered statistically significant.…”
Section: Discussion
confidence: 99%
“…With this approach, exact solutions can be found in the case of tree-based models [33,34]. SHAP values have been used fairly extensively in recent biomedical applications [65,32,66,67]. For each of the assessments, we provide overall global SHAP values across all 100 simulations.…”
Section: Identifying Digital Measures of Interest
confidence: 99%
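The excerpt above reports "overall global SHAP values across all 100 simulations." One plausible way to aggregate such values, sketched here with illustrative names only (the cited papers do not specify their code), is to refit the model on bootstrap resamples and average the per-feature mean absolute SHAP value across runs:

```python
# Hedged sketch of aggregating global SHAP values across repeated runs,
# as the excerpt describes; function and variable names are illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

def global_shap(X, y, n_runs=100, seed=0):
    """Mean |SHAP| per feature, averaged over refits on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    per_run = []
    for _ in range(n_runs):
        idx = rng.integers(0, len(X), len(X))             # bootstrap resample
        model = RandomForestRegressor(n_estimators=50).fit(X[idx], y[idx])
        sv = shap.TreeExplainer(model).shap_values(X[idx])
        per_run.append(np.abs(sv).mean(axis=0))           # global value per run
    return np.mean(per_run, axis=0)                       # average across runs
```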
“…Overall, their purpose is to generate an explicit knowledge representation (in terms understandable to humans) of the models' inner workings and of how they generate their predictions [23]. The use of explainable ML (XML) as a novel paradigm has started to grow in health care [24][25][26] and has been used in a few studies in oncology [27][28][29][30][31], but its potential remains largely unexplored and underused.…”
Section: Introduction
confidence: 99%