2023
DOI: 10.3390/healthcare11142000

Application of SHAP for Explainable Machine Learning on Age-Based Subgrouping Mammography Questionnaire Data for Positive Mammography Prediction and Risk Factor Identification

Abstract: Mammography is considered the gold standard for breast cancer screening. Multiple risk factors that affect breast cancer development have been identified; however, there is an ongoing debate regarding the significance of these factors. Machine learning (ML) models and Shapley Additive Explanation (SHAP) methodology can rank risk factors and provide explanatory model results. This study used ML algorithms with SHAP to analyze the risk factors between two different age groups and evaluate the impact of each factor…
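The abstract's core idea, ranking questionnaire risk factors with SHAP on top of an ML classifier, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the feature names, the synthetic data, and the gradient-boosting model are all assumptions made for demonstration.

```python
# Minimal sketch: rank questionnaire-style risk factors by SHAP importance.
# All feature names and data here are synthetic stand-ins, not the paper's
# actual questionnaire items.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(40, 75, n),
    "bmi": rng.normal(25.0, 4.0, n),
    "family_history": rng.integers(0, 2, n),
    "age_at_menarche": rng.integers(10, 16, n),
    "parity": rng.integers(0, 5, n),
})
# Synthetic label loosely tied to two features, for demonstration only.
y = ((X["family_history"] == 1) & (X["age"] > 55)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles; for this
# binary model it returns one (n_samples, n_features) array of log-odds
# contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global importance ranking: mean absolute SHAP value per feature.
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))
```

To compare age-based subgroups as the study does, the same ranking could be computed separately on each subgroup's rows.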

Cited by 7 publications (7 citation statements)
References 56 publications
“…The explainable/interpretable algorithms used are deep-learning explanation algorithms. Of the 14 papers, 11 (78.57%) use Explainer alone or with Grad-CAM [29], interpretable deep learning [30], Grad-CAM [31], Fisher information network (FIN) [39], AI and polygenic risk score (PRS) algorithms [40], DenseNet [35], Explainability-partial [34], Explainability-full [34], VGG-16 [37], a fine-tuned MobileNet-V2 convolutional neural network [33], OMIG explainability [32], and BI-RADS-Net-V2 [38]; SHAP [41], [42] is used in 2 papers (14.3%) and LIME [36] in 1 paper (7.14%).…”
Section: Results (mentioning)
confidence: 99%
“…In Sun et al.'s study [42], model-agnostic versus model-specific methods, a post hoc (black box + SHAP) technique, and three algorithms, namely logistic regression, extreme gradient boosting, and random forest, were evaluated for performance by sensitivity, specificity, and AUC [42]. This evaluation was applied to the black-box model only. Moreover, SHAP was used to visualise feature importance with a heatmap, but it was not tested.…”
Section: Discussion (mentioning)
confidence: 99%
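The evaluation and visualisation this statement describes can be sketched roughly as follows, continuing the synthetic example above (model, X_train, X_test, and y_test carry over from it). Sensitivity and specificity come from the confusion matrix, AUC from predicted probabilities, and the heatmap uses shap's newer Explanation-based plotting API; none of this is Sun et al.'s actual code.

```python
# Continues the sketch above: sensitivity, specificity, and AUC for the
# fitted classifier, plus a SHAP heatmap of per-sample attributions.
import shap
from sklearn.metrics import confusion_matrix, roc_auc_score

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]   # probability of the positive class

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
auc = roc_auc_score(y_test, y_prob)
print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  auc={auc:.3f}")

# Feature-importance heatmap (requires matplotlib; Explanation-based API).
explanation = shap.Explainer(model, X_train)(X_test)
shap.plots.heatmap(explanation)
```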
“…Overall, the SHAP method provides insights into the extraction of genetic features that influence predicted outcomes. With the increasing popularity of SHAP, clinical studies have increasingly used this method for explaining model features [41], [42]. Our analytical framework effectively integrates data from different dimensions and can yield accurate results.…”
Section: Discussion (mentioning)
confidence: 99%