2006
DOI: 10.1007/s10287-005-0003-7

Fixed-size Least Squares Support Vector Machines: A Large Scale Application in Electrical Load Forecasting

Keywords: Least squares support vector machines; Nyström approximation; Fixed-size LS-SVM; Kernel-based methods; Sparseness; Primal space regression; Load forecasting; Time series

Cited by 86 publications (36 citation statements)
References 24 publications (23 reference statements)
“…To cope with the large number of datapoints, a fixed-size least squares support vector machine (LS-SVM) was adopted. This method selects a (small) fixed number of training datapoints M (M ≪ N) representing the underlying distribution of the dataset through maximization of the quadratic Rényi entropy [29]. A Radial Basis Function (RBF) kernel was used during this active selection of the support vectors, and its bandwidth parameter was computed according to a rule of thumb; the factor was tuned experimentally and set equal to 0.1.…”
Section: Sleep Stage Classification
confidence: 99%
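The selection procedure described in this excerpt — keeping a fixed subset of M prototypes that maximizes the quadratic Rényi entropy under an RBF kernel — can be sketched as below. This is an illustrative assumption, not the cited paper's implementation: the function names, the random swap-search heuristic, and the iteration count are all hypothetical, and the bandwidth `sigma` is left as a free parameter rather than the paper's tuned rule of thumb.

```python
import numpy as np

def rbf_kernel(X, sigma):
    # Pairwise Gaussian (RBF) kernel matrix for the rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def renyi_entropy(X, sigma):
    # Quadratic Renyi entropy estimate of a sample:
    # -log of the mean of its kernel matrix.
    return -np.log(rbf_kernel(X, sigma).mean())

def select_fixed_size(X, M, sigma, iters=500, rng=None):
    # Hypothetical swap search: start from a random subset of size M
    # and accept a random swap only when the entropy increases.
    rng = np.random.default_rng(rng)
    N = X.shape[0]
    idx = rng.choice(N, size=M, replace=False)
    best = renyi_entropy(X[idx], sigma)
    for _ in range(iters):
        i = rng.integers(M)      # position inside the subset
        j = rng.integers(N)      # candidate from the full dataset
        if j in idx:
            continue
        trial = idx.copy()
        trial[i] = j
        h = renyi_entropy(X[trial], sigma)
        if h > best:             # keep the swap only if entropy grows
            idx, best = trial, h
    return idx
```

Maximizing the entropy spreads the M prototypes over the support of the data distribution, which is what makes the small subset representative of the full training set.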
“…Least squares support vector machine (LS-SVM) is a reformulation of the standard SVM (Vapnik, 1995) which leads to solving linear Karush-Kuhn-Tucker (KKT) systems. LS-SVM is closely related to regularization networks and Gaussian processes but additionally emphasizes and exploits primal-dual interpretations (Espinoza et al, 2006).…”
Section: Function Estimation Using LS-SVM
confidence: 99%
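The linear KKT system this excerpt refers to can be sketched for RBF-kernel regression as follows; the helper names and parameter values are illustrative assumptions, and this solves the standard dense dual system rather than the fixed-size primal-space variant of the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    # Gaussian kernel between rows of A and rows of B.
    d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    # Solve the LS-SVM KKT linear system:
    #   [ 0   1^T         ] [b]     [0]
    #   [ 1   K + I/gamma ] [alpha] [y]
    N = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]       # bias b, dual coefficients alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma):
    # y_hat(x) = sum_i alpha_i K(x, x_i) + b
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b
```

Because the equality constraints replace the SVM's inequality constraints, training reduces to this single linear solve instead of a quadratic program; the price is that every training point gets a nonzero dual coefficient, which is exactly the lack of sparseness that the fixed-size method addresses.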
“…as this is done implicitly through the use of positive definite kernel functions K (Espinoza et al, 2006). From the Lagrange function …”
Section: Function Estimation Using LS-SVM
confidence: 99%
“…LS-SVM adopts a least squares linear system as its loss function instead of the quadratic program of the original SVM, which is time-consuming to train [26][27][28][29][30]. LS-SVM shows manifest advantages, such as good nonlinear fitting ability, strong generalization capability, fast computation, the ability to handle small samples, and independence from the distribution characteristics of the samples [31][32][33][34]. The performance of the LS-SVM model depends largely on the selection of its parameters.…”
Section: Introduction
confidence: 99%