xECGArch: a trustworthy deep learning architecture for interpretable ECG analysis considering short-term and long-term features
Marc Goettling, Alexander Hammer, Hagen Malberg, et al.
Abstract: Deep learning-based methods have demonstrated high classification performance in the detection of cardiovascular diseases from electrocardiograms (ECGs). However, their black-box character and the associated lack of interpretability limit their clinical applicability. To overcome existing limitations, we present a novel deep learning architecture for interpretable ECG analysis (xECGArch). For the first time, short- and long-term features are analyzed by two independent convolutional neural networks (CNNs) and c…
Deep learning (DL) has demonstrated high accuracy in ECG analysis but lacks explainability. Although explanations can be estimated using explainable artificial intelligence, their causality has not yet been sufficiently investigated. We present a generalizable method for extensively validating the causality of DL explanations by relating them to clinically relevant ECG characteristics. We applied xECGArch, which combines a long-term and a short-term model, to atrial fibrillation (AF) detection in 1,521 single-lead ECGs, achieving an accuracy of 96.3%. The explanations match the diagnostic criteria of AF regarding rhythm and morphology. While the short-term model emphasizes morphological features such as P and fibrillatory waves, the long-term model focuses on QRS complexes. Moreover, the long-term model explanations strongly correlate with rhythm (\(p<0.001\)). For improved clinical interpretability, we introduce a fused representation (xFuseMap) that highlights relevant explanations for rhythm and morphology. We thus demonstrate an explainable and interpretable DL application with the potential to provide diagnostic support.
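The core idea of combining two independent branches with different receptive fields can be illustrated with a minimal sketch. This is not the authors' implementation: the kernel sizes, dilation, pooling, and the averaging fusion rule below are all illustrative assumptions; xECGArch's actual CNNs and xFuseMap fusion are more elaborate. The sketch only shows how a small-receptive-field branch (short-term morphology, e.g. P/fibrillatory waves) and a dilated, long-receptive-field branch (long-term rhythm) can analyze the same single-lead signal and be fused into one score:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernel, dilation=1):
    """Valid 1-D convolution; a larger dilation widens the receptive field."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def branch(x, kernel, dilation):
    """One CNN-like branch: convolution -> ReLU -> global average pooling -> sigmoid."""
    h = np.maximum(conv1d(x, kernel, dilation), 0.0)  # ReLU activation
    pooled = h.mean()                                 # global average pooling
    return 1.0 / (1.0 + np.exp(-pooled))              # probability-like score

# Stand-in for a single-lead ECG segment; weights are random, untrained.
ecg = rng.standard_normal(500)
k_short = rng.standard_normal(5)   # short-term branch: dense, local kernel
k_long = rng.standard_normal(5)    # long-term branch: same kernel, large dilation

p_short = branch(ecg, k_short, dilation=1)   # sensitive to local wave morphology
p_long = branch(ecg, k_long, dilation=16)    # spans a rhythm-scale context
p_af = 0.5 * (p_short + p_long)              # fused score (averaging is an assumption)
print(round(p_af, 3))
```

In a trained system each branch would be a deep CNN and the fusion would weight both explanations (as in xFuseMap) rather than simply averaging two scalars; the sketch only demonstrates the two-timescale design.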