2008
DOI: 10.1007/s10994-008-5055-9

Efficient approximate leave-one-out cross-validation for kernel logistic regression

Abstract: Kernel logistic regression (KLR) is the kernel learning method best suited to binary pattern recognition problems where estimates of a-posteriori probability of class membership are required. Such problems occur frequently in practical applications, for instance because the operational prior class probabilities or equivalently the relative misclassification costs are variable or unknown at the time of training the model. The model parameters are given by the solution of a convex optimization problem, which may…
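
To make the approach concrete, here is a minimal sketch, assuming dual kernel logistic regression fitted by iteratively re-weighted least squares (IRWLS), with approximate leave-one-out residuals obtained in closed form from the final weighted least-squares step. This is not the paper's exact algorithm (the published method additionally handles a bias term and uses efficient matrix factorizations); all names below are illustrative:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel: K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def fit_klr_irwls(K, y, lam=1.0, n_iter=100, tol=1e-8):
    """Dual kernel logistic regression via IRWLS.  Labels y are in
    {0, 1}; returns the dual coefficients alpha and the matrix
    C = (K + lam * W^{-1})^{-1} from the final weighted ridge step."""
    alpha = np.zeros(len(y))
    for _ in range(n_iter):
        f = K @ alpha                              # latent values
        p = 1.0 / (1.0 + np.exp(-f))               # class probabilities
        w = np.clip(p * (1.0 - p), 1e-10, None)    # IRWLS weights
        z = f + (y - p) / w                        # working responses
        C = np.linalg.inv(K + lam * np.diag(1.0 / w))
        alpha_new = C @ z                          # weighted ridge step
        if np.max(np.abs(alpha_new - alpha)) < tol:
            return alpha_new, C
        alpha = alpha_new
    return alpha, C

def approx_loo_residuals(alpha, C):
    """Approximate leave-one-out residuals of the working responses via
    the closed-form identity r_i = alpha_i / C_ii for the final weighted
    kernel ridge problem -- one full-sample fit instead of n re-fits."""
    return alpha / np.diag(C)
```

The key design point is that at the IRWLS fixed point the model is exactly a weighted kernel ridge fit, for which leave-one-out residuals have a closed form; applying that identity once yields the approximate cross-validation criterion at negligible extra cost.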

Cited by 100 publications (69 citation statements); references 40 publications.

“…Cawley and Talbot (2008) propose a method that yields all leave-one-out prediction errors as a by-product of estimating (2) only once, that is, on the full sample. We derive a similar result, extended to allow for the additional linear terms in (3), in Appendix A.3.…”
Section: Selection of Tuning Parameters (confidence: 99%)

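The "by-product" nature of such results rests on the standard closed-form leave-one-out identity for linear smoothers. The citing paper's equations (2) and (3) are not reproduced in this report, so what follows is only the generic form of the identity, not their extended derivation: for any fit that is linear in the responses, $\hat{y} = Hy$,

$$
y_i - \hat{y}_i^{(-i)} = \frac{y_i - \hat{y}_i}{1 - H_{ii}},
$$

so a single fit on the full sample yields all $n$ leave-one-out prediction errors.
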
“…In this report, grid pattern searching technique [40] was used to choose the parameters of RBF. The residual sum of squares was calculated using leave-one-out crossvalidation (LOOCV) [41] during the model building.…”
Section: Methods (confidence: 99%)

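As a hedged illustration of this recipe (names are hypothetical, and the helper functions from the sketch above are assumed), a grid search over the RBF width and regularization parameter, scored by the approximate leave-one-out residual sum of squares (the PRESS statistic), might look like:

```python
import itertools
import numpy as np

def grid_search_press(X, y, gammas, lams):
    """Pick (gamma, lam) minimizing the approximate leave-one-out
    residual sum of squares (PRESS).  Reuses rbf_kernel, fit_klr_irwls
    and approx_loo_residuals from the sketch above."""
    best_params, best_press = None, np.inf
    for gamma, lam in itertools.product(gammas, lams):
        K = rbf_kernel(X, X, gamma=gamma)
        alpha, C = fit_klr_irwls(K, y, lam=lam)
        press = float(np.sum(approx_loo_residuals(alpha, C) ** 2))
        if press < best_press:
            best_params, best_press = (gamma, lam), press
    return best_params, best_press

# Illustrative usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
params, press = grid_search_press(X, y,
                                  gammas=[0.1, 1.0, 10.0],
                                  lams=[0.01, 0.1, 1.0])
```

Because each candidate is scored from a single full-sample fit, the cost of the grid search grows with the number of grid points rather than with the number of grid points times the sample size, which is what makes leave-one-out selection of tuning parameters practical here.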
“…Mika et al., 1999; Rifkin, 2002; Cawley and Talbot, 2003; Van Gestel et al., 2004). Kernel logistic regression, with a cross-entropy loss that is more obviously suited to statistical pattern recognition, does not outperform kernel Fisher discriminant analysis (Cawley and Talbot, 2008). Furthermore, we have used these methods in winning entries in a number of open machine learning challenges and as highly competitive baseline methods in others (Cawley, 2006; Guyon et al., 2008; Cawley, 2009, 2011).…”
Section: Kernel Learning at the First Level of Inference (confidence: 99%)