2016
DOI: 10.1016/j.jneumeth.2016.10.008

Interpretable deep neural networks for single-trial EEG classification

Abstract: Background: In cognitive neuroscience the potential of Deep Neural Networks (DNNs) for solving complex classification tasks is yet to be fully exploited. The most limiting factor is that DNNs, as notorious 'black boxes', do not provide insight into the neurophysiological phenomena underlying a decision. Layerwise Relevance Propagation (LRP) has been introduced as a novel method to explain individual network decisions. New Method: We propose the application of DNNs with LRP for the first time for EEG data an…

Cited by 331 publications (246 citation statements) · References 19 publications
“…Further, two cost functions were used to measure the distance between the estimated and the actual attended envelope: first, the MSE was used as a cost function C MSE . This is a widely used approach in deep learning (Bengio et al, 2007;Bengio, 2012;Sturm et al, 2016). With this cost function and the simple neural net, a linear regression is resembled by…”
Section: Neural Network Structure
Mentioning, confidence: 99%
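The statement above notes that minimizing an MSE cost with a simple (single linear unit) neural net resembles linear regression. A minimal sketch of that equivalence, with purely synthetic data and illustrative names (none taken from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # synthetic input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)   # synthetic regression target

# Closed-form ordinary-least-squares solution
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gradient descent on the MSE cost C_MSE = mean((X w - y)^2),
# i.e. training a single linear unit with no activation function
w = np.zeros(3)
for _ in range(2000):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)
    w -= 0.1 * grad

# The MSE-trained linear unit converges to the least-squares weights
print(np.allclose(w, w_ls, atol=1e-3))
```

With a nonlinear activation or additional layers the equivalence breaks down; it holds only for this degenerate one-layer, identity-activation case.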
“…While statistical models such as NNs are often considered to be black boxes, several methods exist to analyze the salient cues for classification learned by the net. An algorithm to analyze the relevance of the input features for producing the output (also referred to as heat mapping) was recently proposed (Bach et al, 2015;Sturm et al, 2016). In this study, the algorithm is used to identify where and when neural activity occurs that is relevant for decoding of auditory attention.…”
Section: Introduction
Mentioning, confidence: 99%
“…The output is a heatmap over the input features that indicates the relevance of each feature to the model output. This makes the method particularly well suited to analyzing image classifiers, though the method has also been adapted for text and electroencephalogram signal classification [31]. Samek et al [32] have also developed an objective metric for comparing the output of LRP with similar heatmapping algorithms.…”
Section: B. Model Functionality
Mentioning, confidence: 99%
“…The LRP technique has been used for EEG data analysis in [28]. Relevance score for each input data point is computed towards the final decision and is then visualized as a heat map providing interpretability.…”
Section: A. Prior Approaches For Model Interpretability
Mentioning, confidence: 99%
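The statements above describe LRP's core idea: a relevance score is computed for each input feature by redistributing the output backwards through the network. A minimal sketch of the LRP epsilon-rule for a single linear layer, using the relevance-conservation property as a sanity check (illustrative only, not the implementation from the cited works):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)        # input features (e.g. one EEG sample's channels)
W = rng.normal(size=(4, 3))   # weights of one linear layer
z = x @ W                     # pre-activations of the next layer
R_out = z                     # initialize relevance with the layer's outputs

# Epsilon-rule: redistribute each output neuron's relevance R_out[j] to the
# inputs in proportion to their contributions x_i * W[i, j] to z[j]; the small
# eps term stabilizes the division when z[j] is near zero.
eps = 1e-6
R_in = np.zeros_like(x)
for j in range(W.shape[1]):
    contrib = x * W[:, j]
    R_in += contrib / (z[j] + eps * np.sign(z[j])) * R_out[j]

# Conservation: total relevance is (approximately) preserved across the layer,
# so the resulting heatmap over inputs accounts for the full output score.
print(np.isclose(R_in.sum(), R_out.sum()))
```

Applying this rule layer by layer, from the classifier output back to the input, yields the per-feature heatmaps described in the quotes above.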