2021
DOI: 10.1101/2021.05.12.443594
Preprint

An Explainable Deep Learning Approach for Multimodal Electrophysiology Classification

Abstract: In recent years, more biomedical studies have begun to use multimodal data to improve model performance. As such, there is a need for improved multimodal explainability methods. Many studies involving multimodal explainability have used ablation approaches. Ablation requires the modification of input data, which may create out-of-distribution samples and may not always offer a correct explanation. We propose using an alternative gradient-based feature attribution approach, called layer-wise relevance propagation…
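To make the abstract's critique concrete, here is a minimal sketch of modality ablation, the approach the authors argue against. The `model_acc` scorer and the dict-of-arrays data layout are hypothetical stand-ins, not the paper's actual pipeline; the point is that zeroing a modality produces inputs the model never saw during training.

```python
import numpy as np

def modality_ablation(model_acc, X, y):
    """Score each modality by the accuracy drop when it is zeroed out.
    model_acc(X, y) -> accuracy is a hypothetical interface."""
    baseline = model_acc(X, y)
    drops = {}
    for name in X:
        # Zeroing a whole modality can create out-of-distribution inputs,
        # which is exactly the concern the abstract raises about ablation.
        ablated = {k: (np.zeros_like(v) if k == name else v) for k, v in X.items()}
        drops[name] = baseline - model_acc(ablated, y)
    return drops  # larger drop -> modality judged more important
```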

Cited by 10 publications (9 citation statements)
References 11 publications
“…Some methods have been developed that could be used for estimating the degree of confidence in an explanation. These methods chiefly involve repeatedly perturbing data samples and examining the effect of the perturbations on model performance or classification probabilities [1], [2]. Unfortunately, the utility of perturbation approaches can be reduced by high dimensional data spaces [3], and perturbation approaches can produce out-of-distribution samples that make explanations unreliable [4].…”
Section: Introduction
confidence: 99%
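The perturbation idea this passage cites, repeatedly perturbing a sample and watching the classification probability, can be sketched in a few lines. The `predict_proba` callable and Gaussian noise model below are assumptions for illustration, not the specific methods of [1] or [2].

```python
import numpy as np

def perturbation_stability(predict_proba, x, n_trials=100, sigma=0.05, seed=0):
    """Estimate output stability under small input perturbations.
    predict_proba(x) -> probability of the predicted class (hypothetical)."""
    rng = np.random.default_rng(seed)
    base = predict_proba(x)
    shifts = np.empty(n_trials)
    for i in range(n_trials):
        # Additive Gaussian noise; as the quoted passage notes, perturbed
        # samples like these can fall outside the training distribution.
        shifts[i] = predict_proba(x + rng.normal(0.0, sigma, size=x.shape)) - base
    return shifts.mean(), shifts.std()  # large |mean| or std -> unstable output
```

A high variance across trials would suggest low confidence in any explanation derived from the perturbations, which is the failure mode the passage describes for high-dimensional data.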
“…For explainability, we used the αβ-rule (70) of layer-wise relevance propagation (LRP) (71,72). LRP is a popular approach that has been used in many studies for insight into neurological time-series and neuroimaging data (8,9,59,60,73-80). LRP involves several steps.…”
Section: Description Of Explainability Approach
confidence: 99%
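For readers unfamiliar with the αβ-rule, the sketch below propagates relevance through a single dense layer with NumPy. It assumes a bias-free layer and the common α=2, β=1 setting (α - β = 1 conserves relevance); it is a generic illustration of the rule, not the citing paper's implementation.

```python
import numpy as np

def lrp_alpha_beta(a, W, R_out, alpha=2.0, beta=1.0, eps=1e-9):
    """One alpha-beta LRP step through a dense layer z = a @ W.
    a: input activations (J,); W: weights (J, K); R_out: relevance (K,)."""
    z = a[:, None] * W                # per-connection contributions z_jk = a_j * w_jk
    z_pos = np.clip(z, 0.0, None)     # positive contributions
    z_neg = np.clip(z, None, 0.0)     # negative contributions
    s_pos = z_pos.sum(axis=0) + eps   # stabilized column sums
    s_neg = z_neg.sum(axis=0) - eps
    # R_j = sum_k (alpha * z_jk^+ / s_k^+ - beta * z_jk^- / s_k^-) * R_k
    return (alpha * z_pos / s_pos - beta * z_neg / s_neg) @ R_out

# Tiny check: input relevance sums to output relevance when biases are ignored.
rng = np.random.default_rng(0)
a, W = rng.normal(size=4), rng.normal(size=(4, 3))
R_out = np.abs(rng.normal(size=3))
print(lrp_alpha_beta(a, W, R_out).sum(), R_out.sum())
```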
“…As a result, most studies have not used explainability (Zhang et al., 2011; Kwon et al., 2018; Niroshana et al., 2019; Phan et al., 2019; Wang et al., 2020; Li et al., 2021), which is concerning because transparency is increasingly required to assist with model development and physician decision making (Sullivan and Schweikart, 2019). As such, more multimodal explainability methods need to be developed (Lin et al., 2019; Mellem et al., 2020; Ellis et al., 2021a,b,c,d). In this study, we use automated sleep stage classification as a testbed for the development of multimodal explainability methods.…”
Section: Introduction
confidence: 99%