2021
DOI: 10.1016/j.eswa.2021.115222
Nested cross-validation when selecting classifiers is overzealous for most practical applications

Cited by 134 publications (83 citation statements). References 25 publications.
“…Training the model involved identifying the optimal model parameters that define a projection from the multivariate feature space into the probability of each sample belonging to one of the two classes (PRE, POST). We used K-fold nested cross-validation (where K = 7 subjects) to fit the model parameters, optimize the hyperparameters [13], and evaluate the out-of-sample performance of the classification model. The average area under the receiver operating characteristic curve (AUC) across outer-loops was used to measure the separability between baseline and post-stimulation neural states.…”
Section: Logistic Regression With Elastic-net Regularization To Classify LFP Data
confidence: 99%
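The nested cross-validation scheme described in the statement above can be sketched in scikit-learn: an inner loop selects hyperparameters for an elastic-net logistic regression, and an outer loop estimates out-of-sample AUC, averaged across folds. The synthetic data, parameter grid, and 7-fold split below are illustrative assumptions, not the cited study's actual pipeline.

```python
# Sketch of K-fold nested cross-validation for an elastic-net logistic
# regression classifier, with AUC scored in each outer fold.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=140, n_features=20, random_state=0)

# Elastic-net penalty requires the saga solver in scikit-learn.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000),
)

# Inner loop: grid search over regularization strength and L1/L2 mix.
param_grid = {
    "logisticregression__C": [0.01, 0.1, 1.0],
    "logisticregression__l1_ratio": [0.2, 0.5, 0.8],
}
inner = GridSearchCV(model, param_grid,
                     cv=KFold(5, shuffle=True, random_state=1),
                     scoring="roc_auc")

# Outer loop: 7 folds (mirroring K = 7); report the mean AUC across folds.
outer_scores = cross_val_score(inner, X, y,
                               cv=KFold(7, shuffle=True, random_state=2),
                               scoring="roc_auc")
print(outer_scores.mean())
```

Because the hyperparameters are re-selected inside each outer fold, the outer AUC is an (approximately) unbiased estimate of generalization performance for the whole model-selection procedure, which is the point the cited paper interrogates.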
“…A machine learning classification method was used to discriminate between intracranial local field potentials (LFPs) recorded at baseline (stimulation-naïve) and after the first exposure to SCC DBS during surgical procedures. Spectral inputs (theta, 4–8 Hz; alpha, 9–12 Hz; beta, 13–30 Hz) to the model were then evaluated for importance to classifier success and tested as predictors of the antidepressant response. A decline in depression scores by 45.6% was observed after 1 week and this early antidepressant response correlated with a decrease in SCC LFP beta power, which most contributed to classifier success.…”
confidence: 99%
“…However, how to define the proportions in which the database will be segmented is a subject under development. Therefore, cross-validation strategies such as leave-one-out cross-validation (LOOCV) or k-fold cross-validation have been used more frequently than techniques such as hold-out validation because they obtain better Ac, Se and Sp in laboratory tests[ 4 - 6 ]; moreover, they consider a larger population in the training process compared to hold-out.…”
Section: To the Editor
confidence: 99%
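The three validation schemes contrasted in the statement above can be compared side by side. The synthetic data and classifier in this sketch are illustrative assumptions; the point is that hold-out rests on a single split, whereas k-fold and leave-one-out (LOOCV) let every sample serve in training across folds, at increasing computational cost.

```python
# Illustrative comparison: hold-out validation vs k-fold CV vs LOOCV.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, LeaveOneOut, cross_val_score,
                                     train_test_split)

X, y = make_classification(n_samples=120, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=2000)

# Hold-out: a single 70/30 split; the estimate depends on that one split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold CV: every sample is tested exactly once; each fold trains on 90%.
kfold_acc = cross_val_score(
    clf, X, y, cv=KFold(10, shuffle=True, random_state=0)).mean()

# LOOCV: n folds of size 1 -- the largest possible training sets,
# at the cost of n separate model fits.
loocv_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(holdout_acc, kfold_acc, loocv_acc)
```

This also illustrates the quoted trade-off: the cross-validated estimates use nearly the whole sample for training in each fold, while hold-out permanently sacrifices 30% of the data to the test set.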
“…Therefore, overfitting was prevented since the parameter exploration was limited to a specific subset of the data. The disadvantage was that the computational cost increased substantially [76]. Grid search parameter tuning and random search parameter tuning are popular ways of experimenting on the values [52,57,58].…”
Section: Hyperparameter Tuning
confidence: 99%
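The two tuning strategies named in the statement above differ in how they spend their budget: grid search exhaustively tries a fixed grid, while random search draws a fixed number of configurations from (possibly continuous) ranges. The SVC classifier and parameter ranges in this sketch are illustrative assumptions.

```python
# Sketch contrasting grid search and random search hyperparameter tuning.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Grid search: all 3 x 3 = 9 combinations, each evaluated by 5-fold CV.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)

# Random search: the same budget of 9 trials, but drawn from continuous
# log-uniform distributions rather than a fixed grid.
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-3, 1e1)},
    n_iter=9, cv=5, random_state=0)
rand.fit(X, y)

print(grid.best_params_, rand.best_params_)
```

Either searcher can serve as the inner loop of a nested cross-validation; wrapping it in an outer CV, as in the quoted studies, is what keeps the tuning from biasing the performance estimate.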