2023
DOI: 10.1007/s11227-023-05356-3

XAI–reduct: accuracy preservation despite dimensionality reduction for heart disease classification using explainable AI

Abstract: Machine learning (ML) has been used for the classification of heart diseases for almost a decade, although understanding the internal workings of the black boxes, i.e., non-interpretable models, remains a demanding problem. Another major challenge in such ML models is the curse of dimensionality, which leads to resource-intensive classification using the comprehensive set of feature vector (CFV). This study focuses on dimensionality reduction using explainable artificial intelligence, without negotiating on accuracy …
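The abstract's central idea, selecting a reduced feature set ("reduct") from the CFV using explainability-derived importance scores, can be sketched as follows. This is an illustrative sketch only, not the paper's method: the function name `top_k_reduct`, the stand-in attribution scores, and the toy data are all hypothetical.

```python
# Illustrative sketch of XAI-driven dimensionality reduction:
# rank features by an attribution score (here, absolute weights
# standing in for SHAP-style importances), and keep the top-k
# as the reduced feature set. All names and values are hypothetical.

def top_k_reduct(importances, k):
    """Return the (sorted) indices of the k features with the
    largest absolute importance scores."""
    order = sorted(range(len(importances)), key=lambda i: -abs(importances[i]))
    return sorted(order[:k])

# toy comprehensive feature vector (CFV) with 5 features
weights = [0.9, 0.05, 1.2, 0.01, 0.7]   # stand-in attribution scores
keep = top_k_reduct(weights, 3)
# keep == [0, 2, 4]: the three highest-|weight| features
```

A classifier retrained on only the `keep` columns would then be compared against the full-CFV model to check that accuracy is preserved, which is the trade-off the study examines.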

Citations: cited by 11 publications (6 citation statements).
References: 24 publications (26 reference statements).
“…contribution of each input variable to the model's predictions, allowing for a better understanding of the model's decision-making process. In the context of medical diagnosis, SHAP can help identify which hematological indicators are most influential in predicting acute heart failure, such as the work recently done in (28)(29)(30).…”
mentioning
confidence: 99%
“…Similarly, XAI technology is used in intrusion detection systems, as experimented by Patil et al. [34] using LIME-based analysis over classification algorithms like DT, RF, and SVM. Another similar study used XGBoost in intrusion detection with SHAP-based analysis and obtained an accuracy of 93.28%, as experimented by Barnard et al. [35]. There are numerous applications of XAI in feature analysis and in explaining algorithms in the healthcare domain: Das et al. [36] have experimented with XAI feature contributions and feature weights using SHAP for heart disease classification. Shad et al. [37] have experimented with XAI for Alzheimer's disease prediction over different neural network models, namely ResNet50, VGG16, and Inception v3, using LIME analysis, and the models obtained accuracies of 82%, 86%, and 82%, respectively.…”
Section: Literature Review
mentioning
confidence: 91%
“…Explainability methods are compared on different machine learning models. Five explainable models are used to show the top features contributing to diagnosing heart diseases [71]. Pneumonia infection classification using transfer learning, with explainability via LIME for chest X-ray images, is also performed.…”
Section: SHAP (Shapley Additive Explanations)
mentioning
confidence: 99%
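Several of the citation statements above rely on SHAP, which attributes a model's prediction to its input features via Shapley values. As a minimal self-contained sketch (not the paper's implementation, and not the shap library, which approximates this efficiently for large models), the exact Shapley value of each feature can be computed by enumerating feature subsets; the toy linear "risk score" model and baseline below are hypothetical.

```python
# Exact Shapley values by subset enumeration (exponential in the
# number of features; SHAP exists precisely to approximate this).
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley value of each feature for model(x), where a
    feature 'absent' from a coalition takes its baseline value."""
    n = len(x)

    def value(subset):
        # features in the coalition take x's values; others the baseline
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # standard Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# toy "risk score" over three hypothetical indicators
model = lambda z: 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[2]
x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# For a linear model, phi[i] equals coefficient * (x[i] - baseline[i]),
# and the attributions sum to model(x) - model(baseline) (efficiency).
```

Ranking features by the magnitude of these attributions is what lets the cited works name the "most influential" indicators, and what the reviewed paper uses to drive its dimensionality reduction.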