2010
DOI: 10.1016/j.csda.2010.01.024

Optimized fixed-size kernel models for large data sets

Cited by 98 publications (60 citation statements) | References 48 publications
“…Since each output is considered independently, these covariance matrices have dimensions N×N, N×1 and 1×1, respectively. Following the technique presented in [19] to optimize the regularization parameter for FS-LS-SVMs, the covariance matrix A on the training and validation set combined can be calculated as follows:…”
Section: Covariance Methods (mentioning, confidence: 99%)
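The excerpt refers to the indexed paper ([19]) for tuning the FS-LS-SVM regularization parameter from quantities computed on the combined training and validation data. As a minimal sketch of the general idea, not the paper's exact covariance construction: once the data are mapped to an explicit Nyström feature matrix, the LS-SVM primal reduces to ridge regression, and a single SVD lets many candidate values of the regularization parameter gamma be scored cheaply. All names (Phi_train, gammas, tune_gamma) are illustrative.

```python
import numpy as np

def tune_gamma(Phi_train, y_train, Phi_val, y_val, gammas):
    """Score many gamma values with one SVD of the feature matrix.

    Minimal sketch, assuming Phi_* are explicit (Nystrom) feature
    matrices; the LS-SVM primal with penalty (1/gamma)||w||^2 is then
    plain ridge regression.
    """
    U, s, Vt = np.linalg.svd(Phi_train, full_matrices=False)
    Uty = U.T @ y_train
    best_gamma, best_err = None, np.inf
    for gamma in gammas:
        # Ridge solution: w = V diag(s / (s^2 + 1/gamma)) U^T y
        w = Vt.T @ ((s / (s**2 + 1.0 / gamma)) * Uty)
        err = np.mean((Phi_val @ w - y_val) ** 2)
        if err < best_err:
            best_gamma, best_err = gamma, err
    return best_gamma, best_err
```

The SVD is computed once, so each additional gamma costs only a few matrix-vector products, which is what makes scanning a fine grid of regularization values affordable on large data sets.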
“…This formula can be used on the entire training set U to generate the input data X. For more details on this process, we refer to [12] and [19]. We apply the FS-LS-SVM approach to the UCI Adult dataset as described in [15] and the heat exchanger task as described in [16].…”
Section: Fixed-Size Least-Squares Support Vector Machines Applied To… (mentioning, confidence: 99%)
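"This formula" in the excerpt is the Nyström approximation that fixed-size LS-SVM uses to build an explicit feature map from a small set of prototype vectors. A minimal sketch, assuming a Gaussian kernel; the kernel choice, the width sigma and the names rbf_kernel/nystrom_features are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_features(U_full, prototypes, sigma=1.0):
    """Map the full training set U_full to explicit features X."""
    K_mm = rbf_kernel(prototypes, prototypes, sigma)  # M x M kernel matrix
    lam, Q = np.linalg.eigh(K_mm)                     # its eigendecomposition
    keep = lam > 1e-10                                # drop numerically null directions
    K_nm = rbf_kernel(U_full, prototypes, sigma)      # N x M cross-kernel
    # i-th feature of x: (1/sqrt(lam_i)) * sum_k Q[k, i] * K(x, x_k)
    return K_nm @ Q[:, keep] / np.sqrt(lam[keep])
```

The resulting X has one column per retained eigenvalue, so a linear model fitted on X approximates a kernel model on the full N-point set at a fraction of the cost.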
“…Furthermore, the algorithm can be tailored to a given application by using the most appropriate kernel function. Beyond that, by using sparse formulations and a fixed-size approach (Suykens et al. 2002, De Brabanter et al. 2010), it is possible to readily handle big data. Finally, by means of adequate adaptations of the core algorithm, hierarchical clustering and a soft clustering approach have been proposed.…”
Section: Introduction (mentioning, confidence: 99%)
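The fixed-size approach mentioned here selects its prototype subset by maximizing the quadratic Rényi entropy of the working set, so that the retained points cover the input distribution well. A minimal sketch of that selection loop, with the kernel width, iteration count and function names as illustrative assumptions rather than the cited papers' settings:

```python
import numpy as np

def renyi_entropy(S, sigma=1.0):
    """Quadratic Renyi entropy estimate: -log of the mean kernel value."""
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    return -np.log(np.exp(-d2 / (2.0 * sigma ** 2)).mean())

def select_prototypes(U_full, M, iters=2000, sigma=1.0, seed=None):
    """Randomly swap points in and out of the working set, keeping
    swaps that increase the entropy estimate."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(U_full), size=M, replace=False)
    best = renyi_entropy(U_full[idx], sigma)
    for _ in range(iters):
        j = rng.integers(len(U_full))
        if j in idx:                       # avoid duplicate prototypes
            continue
        i = rng.integers(M)
        old, idx[i] = idx[i], j            # tentative swap
        h = renyi_entropy(U_full[idx], sigma)
        if h > best:
            best = h                       # keep the swap
        else:
            idx[i] = old                   # revert it
    return idx
```

Because the entropy estimate only needs the M selected points, each iteration costs O(M²) kernel evaluations regardless of N, which is what keeps the method usable on big data.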
“…As a consequence, it incurs a high computational load in training and has poor robustness. To overcome these drawbacks, many efforts have been made by Suykens [12], De Kruif and De Vries [13], Hoegaerts [14], Zeng and Chen [15], Jiao [16] and others. For large data sets, De Brabanter et al. [17] and Karsmakers et al. [6] recently developed the fixed-size kernel (SVR) modeling method. More recently, a novel and much sparser LSSVR method named improved recursive reduced LSSVR (IRR-LSSVR) was proposed by Zhao and Sun [18], combining a reduced technique [19] with the iterative strategy of [16].…”
Section: Introduction (mentioning, confidence: 99%)