2009
DOI: 10.2202/1557-4679.1105
On the Use of K-Fold Cross-Validation to Choose Cutoff Values and Assess the Performance of Predictive Models in Stepwise Regression

Abstract: This paper addresses a methodological technique of leave-many-out cross-validation for choosing cutoff values in stepwise regression methods to simplify the final regression model. A practical approach to choosing cutoff values through cross-validation is to compute the minimum Predicted Residual Sum of Squares (PRESS). Leave-one-out cross-validation may overestimate a model's predictive capability; see, for example, Shao (1993) and So et al. (2000). Shao proves with asymptotic results and simulation that t…
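As a concrete illustration of the PRESS criterion mentioned in the abstract, here is a minimal leave-one-out sketch, assuming an ordinary least-squares model and synthetic data (the `press` function and the toy regression are illustrative, not taken from the paper):

```python
import numpy as np

def press(X, y):
    """Predicted Residual Sum of Squares via leave-one-out:
    refit the least-squares model with each observation held out,
    then accumulate the squared prediction error on that point."""
    n = len(y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i                     # drop observation i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        total += float((y[i] - X[i] @ beta) ** 2)    # error on held-out point
    return total

# toy data: y = 1 + 2x with small Gaussian noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + rng.normal(scale=0.1, size=30)
print(press(X, y))
```

Because each observation is predicted by a model that never saw it, PRESS is never smaller than the ordinary residual sum of squares, which is why minimizing it guards against overly optimistic model selection.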

Cited by 40 publications (15 citation statements) | References 15 publications
“…The second ranking criterion is based on 10‐Fold Cross‐Validation (10FCV) to estimate the true prediction error of each model. K‐fold cross‐validation is a commonly used method for assessing the performance of predictive models, providing accurate estimates of the true prediction error [Mahmood and Khan, ]. In 10FCV, the data are first split into 10 folds (i.e., data subsets).…”
Section: Methods
confidence: 99%
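The 10FCV procedure described in this quote can be sketched as follows; the least-squares fit and synthetic data are illustrative assumptions, not the cited paper's setup:

```python
import numpy as np

def kfold_prediction_error(X, y, k=10, seed=0):
    """Estimate the true prediction error by k-fold cross-validation:
    partition the data into k folds, fit on k-1 folds, and average the
    squared prediction errors on each held-out fold."""
    idx = np.random.default_rng(seed).permutation(len(y))
    sq_errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)              # the other k-1 folds
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        sq_errors.extend((y[fold] - X[fold] @ beta) ** 2)
    return float(np.mean(sq_errors))

# toy data: y = 1 + 2x with small Gaussian noise
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + rng.normal(scale=0.1, size=50)
print(kfold_prediction_error(X, y))  # an estimate of the true prediction error
```

For a well-specified model, the estimate should hover near the noise variance of the data; larger gaps between training error and this CV estimate signal overfitting.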
“…Cross-validation was used to assess the statistical models and to select amongst the regression models in this study. K-fold cross-validation was implemented by randomly dividing the data into k roughly equal subsamples; the model was fit on k−1 of the subsamples and its prediction error computed on the remaining subsample, with each subsample held out once for verification [49]. To obtain a more stable estimate, k-fold cross-validation is often carried out n times, which is called n-repeat k-fold cross-validation.…”
Section: Model Validation
confidence: 99%
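The n-repeat k-fold scheme described in this quote can be sketched as below; the least-squares model and synthetic data are illustrative assumptions:

```python
import numpy as np

def repeated_kfold_mse(X, y, k=10, n_repeats=5):
    """n-repeat k-fold CV: redo the random k-fold split n_repeats times
    and average the resulting error estimates for a more stable value."""
    estimates = []
    for r in range(n_repeats):
        idx = np.random.default_rng(r).permutation(len(y))  # fresh split
        sq = []
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
            sq.extend((y[fold] - X[fold] @ beta) ** 2)
        estimates.append(np.mean(sq))                       # one k-fold estimate
    return float(np.mean(estimates))                        # average over repeats

# toy data: y = 1 + 2x with small Gaussian noise
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + rng.normal(scale=0.1, size=50)
print(repeated_kfold_mse(X, y))
```

Averaging over repeats reduces the variance that comes from any single random partition of the data, at the cost of n times the computation.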
“…This procedure was done by splitting the training dataset into 10 subsets and taking turns training models on all subsets except one, which is held out, then computing model performance on the held-out validation subset. In this paper, 10 models are built and evaluated for CV [Mahmood & Khan, 2009]. For each trial, a sliding window of size 125 was applied along the time axis.…”
Section: Results
confidence: 99%
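Extracting windows of size 125 along the time axis, as this quote describes, might look like the following sketch; the number of channels, the trial length, and the non-overlapping step are assumptions, since the original step size is not stated:

```python
import numpy as np

# hypothetical trial: 8 channels x 1000 time samples (shapes are assumptions)
trial = np.random.default_rng(0).normal(size=(8, 1000))

win, step = 125, 125  # window size from the text; step size is an assumption
windows = [trial[:, s:s + win]
           for s in range(0, trial.shape[1] - win + 1, step)]
print(len(windows), windows[0].shape)  # → 8 windows of shape (8, 125)
```

Each window would then serve as one sample for the cross-validated models; with an overlapping step (step < win) the same trial yields more, correlated, samples.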