2002
DOI: 10.1142/5089

Least Squares Support Vector Machines

Abstract: …$\sum_{k=1}^{N} \alpha_k y_k = 0$, $0 \le \alpha_k \le c$, $k = 1, \ldots, N$. Note: $w$ and $\varphi(x_k)$ are not calculated. • Mercer condition: $K(x_k, x_l) = \varphi(x_k)^{T} \varphi(x_l)$. • Obtained classifier: $y(x) = \operatorname{sign}\bigl[\sum_{k=1}^{N} \alpha_k y_k K(x, x_k) + b\bigr]$, with $\alpha_k$ positive real constants and $b$ a real constant, which follow as the solution to the QP problem. Non-zero $\alpha_k$ are called support values and the corresponding data points are called support vectors. The bias term $b$ follows from the KKT conditions. • Some possible kernels $K(\cdot, \cdot)$: $K(x, x_k) = x_k^{T} x$ (linear SVM), $K(x, x_k$…
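To make the prediction step concrete, here is a minimal Python sketch of the decision function quoted above, assuming the support values $\alpha_k$ and bias $b$ have already been obtained from the QP solver. The RBF kernel, its bandwidth `sigma`, and the function names are illustrative choices, not mandated by the text.

```python
import numpy as np

def rbf_kernel(x, x_k, sigma=1.0):
    """Illustrative RBF kernel K(x, x_k) = exp(-||x - x_k||^2 / sigma^2)."""
    return np.exp(-np.sum((x - x_k) ** 2) / sigma ** 2)

def svm_predict(x, support_vectors, support_labels, alphas, b, kernel=rbf_kernel):
    """Evaluate y(x) = sign[ sum_k alpha_k y_k K(x, x_k) + b ].

    alphas and b are assumed to come from a QP solver; only the support
    vectors (data points with non-zero alpha_k) need to be passed in.
    """
    s = sum(a_k * y_k * kernel(x, x_k)
            for a_k, y_k, x_k in zip(alphas, support_labels, support_vectors))
    return np.sign(s + b)
```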

Cited by 2,478 publications (2,033 citation statements). References: 0 publications.
“…Very many approaches have been suggested, attempted and tested. See among many references, e.g., (Chen and Billings, 1992; Harris et al., 2002; Roll et al., 2002; Sjöberg et al., 1995; Suykens et al., 2002; Vidyasagar, 1997).…”
Section: Introduction (mentioning)
confidence: 99%
“…However, to apply the centering constraints (14), a new parameter y should be added to (12) to get [13].…”
Section: Problem Definition (mentioning)
confidence: 99%
“…However, the need to specify the neural network topology in terms of the number of nodes and layers, and to solve a non-convex optimization problem, makes its implementation difficult. Lately, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have shown excellent ability in estimating linear and nonlinear functions ([11], [12]). Goethals et al. [13] considered the extension of the N4SID family of subspace model identification schemes to the Hammerstein type of nonlinear system.…”
Section: Introduction (mentioning)
confidence: 99%
“…Least Squares Support Vector Machines (LSSVM) reformulate the original SVM algorithm. The method was proposed by Suykens and Vandewalle [15] for the purpose of solving short-term load prediction problems. LSSVM is reported to consume less computational effort than the standard SVM on huge-scale problems.…”
Section: A Least Squares Support Vector Machine (mentioning)
confidence: 99%
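The reformulation referred to here replaces the SVM's inequality-constrained QP by equality constraints with a squared error term, so the support values and bias follow from a single linear KKT system: $\begin{bmatrix} 0 & y^{T} \\ y & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ 1_N \end{bmatrix}$ with $\Omega_{kl} = y_k y_l K(x_k, x_l)$. Below is a minimal Python sketch of that training step; the function name and input conventions are assumptions for illustration.

```python
import numpy as np

def lssvm_train(X, y, gamma, kernel):
    """Solve the LS-SVM classifier KKT system
        [ 0    y^T              ] [ b     ]   [ 0   ]
        [ y    Omega + I/gamma  ] [ alpha ] = [ 1_N ]
    with Omega_kl = y_k * y_l * K(x_k, x_l); gamma is the regularization constant.
    """
    N = len(y)
    # Kernel (Gram) matrix and Omega
    K = np.array([[kernel(X[k], X[l]) for l in range(N)] for k in range(N)])
    Omega = np.outer(y, y) * K
    # Assemble the (N+1) x (N+1) KKT matrix and right-hand side
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], np.ones(N)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # (alpha, b)
```

One linear solve replaces the iterative QP, which is the source of the lower computational effort reported for large problems; the trade-off is that sparseness of the support values is lost.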
“…They perform structural risk minimization, which was introduced to machine learning by Vapnik [14]. LSSVM reformulates the original SVM algorithm and is reported to consume less computational effort than the standard SVM on huge-scale problems [15]. For LSSVM, the regularization and kernel parameters are known as hyper-parameters.…”
Section: Introduction (mentioning)
confidence: 99%
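To illustrate how those hyper-parameters might be tuned, the sketch below grid-searches the regularization constant $\gamma$ and an RBF bandwidth $\sigma$ by k-fold cross-validation, reusing the `lssvm_train` sketch above. The grids, fold count, and the choice of an RBF kernel are assumptions for illustration, not prescriptions from the cited work; `X` and `y` are assumed to be NumPy arrays with labels in $\{-1, +1\}$.

```python
import numpy as np
from itertools import product

def cv_accuracy(X, y, gamma, sigma, folds=5):
    """k-fold cross-validation accuracy of an RBF LS-SVM for one (gamma, sigma)."""
    kern = lambda a, b: np.exp(-np.sum((a - b) ** 2) / sigma ** 2)
    idx = np.arange(len(y))
    np.random.default_rng(0).shuffle(idx)  # fixed seed for reproducible folds
    scores = []
    for test in np.array_split(idx, folds):
        train = np.setdiff1d(idx, test)
        alpha, b = lssvm_train(X[train], y[train], gamma, kern)  # sketch above
        preds = np.array([
            np.sign(sum(a * yk * kern(x, xk)
                        for a, yk, xk in zip(alpha, y[train], X[train])) + b)
            for x in X[test]
        ])
        scores.append(np.mean(preds == y[test]))
    return float(np.mean(scores))

def select_hyperparameters(X, y):
    """Pick (gamma, sigma) from small illustrative grids by CV accuracy."""
    grid = product([0.1, 1.0, 10.0, 100.0], [0.5, 1.0, 2.0, 4.0])
    return max(grid, key=lambda gs: cv_accuracy(X, y, *gs))
```

A plain grid search is shown only because it is the simplest defensible baseline; in practice coupled simulated annealing, Bayesian approaches, or other search strategies are also used for LS-SVM hyper-parameter selection.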