Proceedings of the 24th International Conference on Machine Learning 2007
DOI: 10.1145/1273496.1273552

Kernelizing PLS, degrees of freedom, and efficient model selection

Cited by 24 publications (26 citation statements); references 11 publications.

“…negative Degrees of Freedom). In Krämer and Braun (2007), the sparse structure of L is used and an additional stopping criterion is imposed to ensure that the latent components are orthogonal. However, …”
[Figure 6: Scaled mean test error as a function of the number of components.]
Section: Numerical Stability and Runtime
Citation type: mentioning; confidence: 99%
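
The stopping criterion is described only at a high level in the quoted statement. Below is a minimal sketch of the idea in Python, assuming a NIPALS-style kernel PLS with deflation; the function name, the ortho_tol threshold, and the exact deflation scheme are illustrative assumptions, not details taken from Krämer and Braun (2007).

```python
import numpy as np

def kernel_pls_components(K, y, max_comp, ortho_tol=1e-8):
    """NIPALS-style kernel PLS (sketch) that stops early once a new
    latent component is no longer numerically orthogonal to the
    previously extracted ones."""
    n = K.shape[0]
    Kres, yres = K.copy(), y.astype(float).copy()
    T = []
    for _ in range(max_comp):
        t = Kres @ yres                       # candidate latent component
        nt = np.linalg.norm(t)
        if nt < 1e-12:                        # residual exhausted
            break
        t /= nt
        if T and np.max(np.abs(np.column_stack(T).T @ t)) > ortho_tol:
            break                             # orthogonality lost: stop
        T.append(t)
        P = np.eye(n) - np.outer(t, t)        # deflate against t
        Kres = P @ Kres @ P
        yres = P @ yres
    return np.column_stack(T) if T else np.empty((n, 0))
```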
“…, Me_p} was obtained from Eq. (28), and the corresponding set of numbers of latent variables for the gates Mg = {Mg_2, …”
Section: Evaluation and Discussion
Citation type: mentioning; confidence: 99%
“…Usually the DOF is set equal to the number of latent variables, but this assumption is incorrect and does not lead to satisfactory results in the selection of the number of latent variables [28,29]. The problem of determining the DOF in a PLS model was addressed in [29], where an unbiased estimate of the DOF was proposed.…”
Section: Selecting the Number of Latent Variables
Citation type: mentioning; confidence: 99%
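
The quoted point is that plugging the naive DoF (the number of latent components m) into a complexity-penalized criterion misleads model selection. Here is a minimal sketch of how such a criterion consumes a DoF value; the function names are hypothetical, and the unbiased DoF estimator of [29] is left as a pluggable argument rather than implemented.

```python
import numpy as np

def aic_like_score(y, y_hat, dof, sigma2):
    """Generic complexity-penalized criterion: mean residual sum of
    squares plus a penalty proportional to the degrees of freedom."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return rss / n + 2.0 * dof * sigma2 / n

# Naive usage sets dof = m for a PLS model with m latent components;
# per the quoted statement this is biased, and an unbiased estimate
# (as proposed in [29]) should be supplied instead, e.g.:
#   score_m = aic_like_score(y, y_hat_m, dof=dof_estimate(m), sigma2=s2)
# where dof_estimate is a hypothetical placeholder for that estimator.
```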
“…6.7 in [3]. Although PLS is usually introduced as an iterative procedure without an explicit objective function, it was shown in [27] that kernel PLS minimizes the same objective function as ordinary least squares, but with the solution restricted to the subspace spanned by Ky, K^2 y, …”
Section: Partial Least Squares
Citation type: mentioning; confidence: 99%
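
The quoted equivalence can be checked directly: with m components, the kernel PLS fit of y is the orthogonal projection of y onto the Krylov subspace span{Ky, K^2 y, ..., K^m y}. A minimal sketch, assuming a precomputed kernel matrix K (the function name is illustrative):

```python
import numpy as np

def krylov_ls_fit(K, y, m):
    """Least-squares fit of y restricted to span{Ky, K^2 y, ..., K^m y};
    in exact arithmetic this matches kernel PLS with m components [27]."""
    cols, v = [], y.copy()
    for _ in range(m):
        v = K @ v                  # next Krylov direction
        cols.append(v)
    Q, _ = np.linalg.qr(np.column_stack(cols))  # stable orthonormal basis
    return Q @ (Q.T @ y)           # projection of y onto the subspace
```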
“…To determine r, see [27] for a comparison among various criteria. In all kernelized PLS algorithms known so far, the computational cost is dominated by matrix-vector products.…”
Section: Partial Least Squares
Citation type: mentioning; confidence: 99%
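
To make the quoted cost statement concrete: for a dense n x n kernel, each matrix-vector product costs O(n^2), and each PLS component needs only a small fixed number of them. A sketch of a counting wrapper (the class is illustrative, not from the cited papers) that makes this measurable:

```python
import numpy as np

class CountingKernel:
    """Wraps a dense kernel matrix and counts matrix-vector products,
    the operation that dominates the runtime of kernelized PLS."""
    def __init__(self, K):
        self.K, self.matvecs = K, 0
    def dot(self, v):
        self.matvecs += 1          # one O(n^2) product
        return self.K @ v

# Routing the products of an m-component PLS run through such a wrapper
# shows O(m) of them, i.e. O(m * n^2) total work for a dense kernel.
```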