2017
DOI: 10.1016/j.neuroimage.2016.10.038
Assessing and tuning brain decoders: Cross-validation, caveats, and guidelines

Abstract: Decoding, i.e. prediction from brain images or signals, calls for empirical evaluation of its predictive power. Such evaluation is achieved via cross-validation, a method also used to tune decoders' hyper-parameters. This paper is a review on cross-validation procedures for decoding in neuroimaging. It includes a didactic overview of the relevant theoretical considerations. Practical aspects are highlighted with an extensive empirical study of the common decoders in within- and across-subject predictions, on m…

Cited by 647 publications (595 citation statements)
References 56 publications (109 reference statements)
“…In accordance with recent recommendations, the current study used 10-fold cross-validation, which has been shown to be less susceptible to overly optimistic estimates than a leave-one-out approach (LOO-CV) (Varoquaux et al 2016). Moreover, we repeated the cross-validation procedure 250 times, averaging the prediction performance over all replications to obtain robust and generalizable estimates of the capability of different brain networks to predict personality scores in new individuals.…”
Section: Discussion
confidence: 99%
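The repeated k-fold procedure described in this statement can be sketched with scikit-learn. The data and estimator below are hypothetical stand-ins (the cited study predicted personality scores from brain-network features); only the cross-validation scheme is the point.

```python
# Sketch: 10-fold cross-validation repeated many times, with performance
# averaged over all replications (the citing study used 250 repeats).
# Data and estimator are hypothetical placeholders.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=100, n_features=20, noise=10.0,
                       random_state=0)

# 10 splits x 250 repeats = 2500 train/test fits, each with a different shuffle
cv = RepeatedKFold(n_splits=10, n_repeats=250, random_state=0)
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv, scoring="r2")
print(f"mean R^2 over {len(scores)} folds: {scores.mean():.3f}")
```

Averaging over repeats reduces the variance that comes from any single random partition of the data, which is the "robust estimates" motivation quoted above.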
“…This experiment is carried out on COBRE, to predict schizophrenia, and on ADNI, to predict AD profiles. Models are evaluated with a cross-validation procedure as recommended in Varoquaux et al (2016): the data are split into stratified train/test sets at the subject level, to avoid fitting and testing on data from the same subject. Splits are randomized over 100 runs.…”
Section: Experimental Settings
confidence: 99%
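Subject-level splitting, as described in this statement, can be expressed with scikit-learn's group-aware splitters. The arrays below are hypothetical (the real experiments used COBRE and ADNI imaging data); `GroupShuffleSplit` guarantees no subject leaks across a split, and `StratifiedGroupKFold` is an alternative when class stratification is also required.

```python
# Sketch: 100 randomized train/test splits grouped by subject, so no
# subject contributes data to both sides of any split. Hypothetical data.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_samples = 200
X = rng.normal(size=(n_samples, 10))
y = rng.integers(0, 2, size=n_samples)       # e.g. patient vs. control
subjects = np.repeat(np.arange(20), 10)      # 20 subjects, 10 scans each

gss = GroupShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
for train_idx, test_idx in gss.split(X, y, groups=subjects):
    # every subject's scans land entirely in train or entirely in test
    assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```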
“…One possible approach is to exclude the entire test set from the model selection procedure using a nested cross-validation strategy. An alternative approach is to employ model averaging techniques to take advantage of the whole dataset (Varoquaux et al, 2017). Since our focus is on model selection, in the remaining text we implicitly assume the test data is excluded from the experiments; thus, all experimental results are reported on the training and validation sets.…”
Section: Methods
confidence: 99%
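The nested cross-validation strategy this statement refers to can be sketched as a tuning loop wrapped inside an evaluation loop. Data and hyper-parameter grid below are hypothetical; the structure is what matters: the outer test folds never influence hyper-parameter selection.

```python
# Sketch: nested cross-validation. The inner loop tunes hyper-parameters
# on training data only; the outer loop estimates generalization on folds
# the tuning never saw. Hypothetical data and grid.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)

# Inner loop: pick the regularization strength C by 5-fold CV
inner = GridSearchCV(LinearSVC(dual=False), {"C": [0.01, 0.1, 1, 10]}, cv=5)

# Outer loop: each held-out fold is untouched by the model selection above
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"generalization estimate: {outer_scores.mean():.3f}")
```

Reporting the inner-loop score instead of the outer one is the optimistic-bias pitfall the reviewed paper warns against.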
“…The most common model selection criterion is based on an estimator of generalization performance, i.e., the predictive power. In the context of brain decoding, especially when the interpretability of brain maps matters, employing predictive power as the only decisive criterion in model selection is problematic in terms of the interpretability of MBMs (Gramfort et al, 2012; Rasmussen et al, 2012; Conroy et al, 2013; Varoquaux et al, 2017). Valverde-Albacete and Peláez-Moreno (2014) experimentally showed that, in a classification task, optimizing only the classification error rate is insufficient to capture the transfer of crucial information from the input to the output of a classifier.…”
Section: Methods
confidence: 99%
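The point about error rate being insufficient can be made concrete with a toy example: two sets of predictions with identical accuracy can transfer very different amounts of information about the true labels. The labels below are invented; normalized mutual information stands in for the information-theoretic criteria discussed.

```python
# Sketch: same accuracy, different information transfer. Toy labels.
import numpy as np
from sklearn.metrics import accuracy_score, normalized_mutual_info_score

y_true = np.array([0] * 80 + [1] * 20)   # imbalanced two-class problem

# Classifier A: always predicts the majority class -> 80% accuracy,
# but its output carries no information about the true label.
y_a = np.zeros(100, dtype=int)

# Classifier B: also 80% accurate, but its errors are spread across
# both classes, so its output is informative about the label.
y_b = y_true.copy()
y_b[:10] = 1      # 10 majority-class samples misclassified
y_b[80:90] = 0    # 10 minority-class samples misclassified

print(accuracy_score(y_true, y_a), accuracy_score(y_true, y_b))  # both 0.8
print(normalized_mutual_info_score(y_true, y_a))  # ~0: no information
print(normalized_mutual_info_score(y_true, y_b))  # > 0: informative
```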