2018
DOI: 10.1016/j.neuroimage.2018.06.076
Reproducibility of importance extraction methods in neural network based fMRI classification

Abstract: Recent advances in machine learning allow faster training, improved performance and increased interpretability of classification techniques. Consequently, their application in neuroscience is rapidly increasing. While classification approaches have proved useful in functional magnetic resonance imaging (fMRI) studies, there are concerns regarding extraction, reproducibility and visualization of brain regions that contribute most significantly to the classification. We addressed these issues using an fMRI class…

Cited by 12 publications (4 citation statements) · References 57 publications
“…The motivation to analyze our data using a neural network classifier, instead of univariate analyses, such as GLM, is twofold: First, classification accuracies directly quantify the discriminability across conditions. Second, previous work has shown that importance maps generated from neural network classifiers can reveal multivariate patterns and patterns with low univariate information (Gotsopoulos et al., 2018). Such information may not be available when performing univariate analysis, such as GLM, or when feeding the classifier with data preprocessed by a univariate method, such as GLM coefficients.…”
Section: Multivariate Pattern Analysis (MVPA)
confidence: 99%
“…A group brain mask was determined by selecting voxels that were present (i.e., had non-zero standard deviation values) in all participants, resulting in a total of 203477 voxels. Classification of different trial types was performed with a linear (i.e., no hidden layers) artificial neural network classifier, as implemented in an in-house developed neural network toolbox that has been previously used to classify fMRI data (Gotsopoulos et al., 2018; available at https://github.com/gostopa1/DeepNNs). The classifier utilized a softmax activation function in the output layer and a cross-entropy loss function.…”
Section: Multivariate Pattern Analysis (MVPA)
confidence: 99%
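The quoted passage describes a linear (no-hidden-layer) classifier with a softmax output trained under cross-entropy loss. A minimal NumPy sketch of such a classifier is shown below; this is not the authors' DeepNNs toolbox, and the function names and toy data are illustrative only.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_linear_softmax(X, y, n_classes, lr=0.1, epochs=200):
    """Linear (no hidden layers) classifier with softmax output,
    trained by gradient descent on the mean cross-entropy loss."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]          # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W + b)        # predicted class probabilities
        G = (P - Y) / n               # gradient of cross-entropy w.r.t. logits
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

# toy data standing in for per-voxel features of two trial types
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_linear_softmax(X, y, n_classes=2)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

In a linear model of this kind, the learned weight matrix `W` itself provides a natural starting point for the importance maps discussed in the cited paper.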
“…To overcome these issues, relevancy (or saliency) backpropagation methods have been proposed in the literature [26]. Since all backpropagation-based approaches depend on gradient computation, they face saturation problems and may produce misleading results at the discontinuities of activation functions [27]. Several attempts have been made recently to determine relevancy (importance, contributions, or saliency) of features that are most discriminative for classification using DNN models [26].…”
Section: Introduction
confidence: 99%
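The saturation problem mentioned in this citation can be seen in a small worked example: for a one-hidden-layer sigmoid network, the input gradient carries a factor s(1 − s) per hidden unit, which vanishes when the unit saturates, so a strongly influential input can receive a near-zero saliency. The sketch below is illustrative only; the weights and names are invented for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(W1, W2, x):
    """Gradient of the scalar output of a one-hidden-layer sigmoid
    network out = W2 . sigmoid(W1 @ x) with respect to the input x:
    d out / d x = W1^T (s * (1 - s) * W2), where s = sigmoid(W1 @ x).
    The s*(1-s) factor goes to zero when a unit saturates."""
    s = sigmoid(W1 @ x)
    return W1.T @ (s * (1 - s) * W2)

W1 = np.array([[5.0, 0.0],    # unit 1: large weight -> saturates on this input
               [0.0, 0.5]])   # unit 2: small weight -> stays in linear regime
W2 = np.array([1.0, 1.0])
x = np.array([3.0, 3.0])
g = input_gradient(W1, W2, x)
# the strongly weighted first input gets a near-zero gradient because its
# hidden unit is saturated, while the weakly weighted second input dominates
```

This is exactly the failure mode that motivates saturation-aware attribution methods over raw gradient saliency.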