2009
DOI: 10.1007/978-3-642-04921-7_36

Efficient Hold-Out for Subset of Regressors

Abstract: Hold-out and cross-validation are among the most useful methods for model selection and performance assessment of machine learning algorithms. In this paper, we present a computationally efficient algorithm for calculating the hold-out performance for sparse regularized least-squares (RLS) in case the method is already trained with the whole training set. The computational complexity of performing the hold-out is O(|H|³ + |H|²n), where |H| is the size of the hold-out set and n is the number of basis vectors…
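The complexity claim in the abstract is easiest to appreciate against the naive baseline, in which the hold-out error is obtained by retraining the sparse (subset-of-regressors) RLS model from scratch without the held-out examples. Below is a minimal NumPy sketch of that baseline; it is not the paper's efficient algorithm, and the Gaussian kernel, the rbf_kernel/train_sparse_rls helpers, and the removal of basis vectors that fall in the hold-out set are illustrative assumptions only.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def train_sparse_rls(X, y, B, lam=1.0):
    # Subset-of-regressors RLS: f(x) = k(x, B) @ a with
    # a = (K_XB^T K_XB + lam * K_BB)^{-1} K_XB^T y.
    K_XB = rbf_kernel(X, B)
    K_BB = rbf_kernel(B, B)
    return np.linalg.solve(K_XB.T @ K_XB + lam * K_BB, K_XB.T @ y)

def predict(X_new, B, a):
    return rbf_kernel(X_new, B) @ a

def naive_holdout_mse(X, y, basis_idx, holdout_idx, lam=1.0):
    # Naive hold-out: retrain on the remaining examples and evaluate on the
    # held-out ones. Basis vectors that belong to the hold-out set are
    # dropped here (the issue raised in the citing work quoted below).
    train_idx = np.setdiff1d(np.arange(len(X)), holdout_idx)
    kept_basis = np.setdiff1d(basis_idx, holdout_idx)
    a = train_sparse_rls(X[train_idx], y[train_idx], X[kept_basis], lam)
    pred = predict(X[holdout_idx], X[kept_basis], a)
    return float(np.mean((pred - y[holdout_idx]) ** 2))

# Toy usage: 200 examples, 30 basis vectors, a 20-point hold-out set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
basis_idx = rng.choice(200, size=30, replace=False)
holdout_idx = np.arange(0, 200, 10)
print(naive_holdout_mse(X, y, basis_idx, holdout_idx, lam=0.5))
```

The paper's contribution is to avoid this retraining step entirely: given the model trained on the full set, the same hold-out predictions are recovered in O(|H|³ + |H|²n) time.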

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1

Citation Types

0
2
0

Year Published

2010
2010
2023
2023

Publication Types

Select...
2
1

Relationship

2
1

Authors

Journals

citations
Cited by 3 publications
(2 citation statements)
references
References 10 publications
0
2
0
Order By: Relevance
“…The computational complexity of computing the CV performance with our algorithm is no larger than the complexity of training sparse RLS. The algorithm presented in this section improves our previously proposed one (Pahikkala et al 2009b). Recall that I = {1, .…”
Section: Fast Computation of Hold-out Error
Mentioning · confidence: 96%
“…However, their method did not remove basis vectors belonging to the holdout set, an approach which we will show to be problematic. A version of this cross-validation algorithm which takes care of this issue for the reduced set approximation of RLS has been introduced by Pahikkala et al [14]. In this work, we generalize the result to arbitrary loss functions with quadratic regularization.…”
Mentioning · confidence: 90%