2022
DOI: 10.1016/j.bspc.2022.103584
Explainable AI decision model for ECG data of cardiac disorders

Cited by 48 publications (21 citation statements)
References 31 publications
“…Deep-learning-based model-agnostic methods have, to the best of our knowledge, rarely been applied to predict hypotension. However, an attempt similar to our task was presented in [17] and [22], where the electrocardiogram signal was employed as time-series input to predict arrhythmia. Among the three model-agnostic methods of SHAP, LIME, and Anchor applied in [17], only SHAP was evaluated as having adequate interpretability in signal analysis.…”
Section: B. Explainable AI and Hypotension Prediction Model Interpreta...
confidence: 99%
“…In existing state-of-the-art schemes, researchers have proposed explainability solutions for various practical healthcare applications, such as COVID-19 detection and prediction [27], [28], cardiovascular disease [29], [30], biomedical engineering [31], structural health monitoring [32], mental health diagnosis [33], and many more. These solutions also explore DL techniques and algorithm implementations that provide global and local explanations.…”
Section: Novelty of the Proposed Survey
confidence: 99%
“…The work in [92] describes existing AI techniques such as ML, DL, and NLP, and extends the survey to explain the importance of EXAI in future medicine and biomedical applications. The authors in [29] analyze medical ECG data and propose a generalized model, ST-CNN-GAP-5, implementing DNN algorithms on two publicly available ECG datasets, achieving an accuracy of 95.8% and an AUC of 99.46%. The dataset is analyzed using SHAP for explainability.…”
Section: State-of-the-art
confidence: 99%
“…Hughes et al. [33] proposed the use of Local Interpretable Model-agnostic Explanations (LIME). Zhang et al. [22] and Anand et al. [34] applied SHapley Additive exPlanations (SHAP) analysis to test the interpretability of an ECG classification model. LIME and SHAP are both perturbation-based techniques that provide explanations based on the variation of the model's output after applying perturbations to its input.…”
Section: Reduced-lead ECG Classification
confidence: 99%
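The perturbation idea the statement above describes can be illustrated with a minimal sketch: occlude one input sample at a time and record how the model's output changes. This is a toy simplification of what LIME and SHAP do, using a hypothetical classifier invented here for illustration, not the models from the cited papers.

```python
def toy_classifier(signal):
    # Hypothetical "model": the score is simply the mean of the
    # middle third of the signal, so large mid-signal values matter.
    mid = signal[len(signal) // 3 : 2 * len(signal) // 3]
    return sum(mid) / len(mid)

def occlusion_importance(model, signal, baseline=0.0):
    """Score each sample by how much replacing it with a baseline value
    changes the model output -- the core perturbation idea behind
    LIME/SHAP-style explanations, greatly simplified."""
    base_score = model(signal)
    importances = []
    for i in range(len(signal)):
        perturbed = list(signal)
        perturbed[i] = baseline  # occlude one sample
        importances.append(base_score - model(perturbed))
    return importances

signal = [0.1, 0.2, 0.1, 0.9, 1.0, 0.8, 0.2, 0.1, 0.1]
scores = occlusion_importance(toy_classifier, signal)
# Samples outside the middle third get zero importance; the large
# mid-signal samples receive the largest scores.
```

Real SHAP values additionally average over all perturbation subsets with Shapley weighting, which is what gives them their consistency guarantees; this single-sample occlusion only conveys the basic intuition.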