2002
DOI: 10.1016/s0925-2312(01)00644-0

Weighted least squares support vector machines: robustness and sparse approximation

Abstract: Least squares support vector machines (LS-SVM) is an SVM version which involves equality instead of inequality constraints and works with a least squares cost function. In this way, the solution follows from a linear Karush-Kuhn-Tucker system instead of a quadratic programming problem. However, sparseness is lost in the LS-SVM case and the estimation of the support values is only optimal in the case of a Gaussian distribution of the error variables. In this paper, we discuss a method which can overcome these two drawbacks. …
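To make the construction in the abstract concrete, below is a minimal numerical sketch of (weighted) LS-SVM regression, assuming an RBF kernel. The function names, toy data, and hyperparameter values are illustrative assumptions; the Hampel-style weight constants c1 = 2.5, c2 = 3.0 follow the values commonly cited for weighted LS-SVM. Training is a single linear solve; robustness comes from a second, weighted solve using the errors e_i = alpha_i / gamma estimated in the first pass.

import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Omega_ij = exp(-||x_i - z_j||^2 / sigma^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def lssvm_fit(X, y, gamma=10.0, sigma=1.0, weights=None):
    # Solve the linear KKT system
    #   [ 0   1^T             ] [b]       [0]
    #   [ 1   Omega + V/gamma ] [alpha] = [y]
    # where V = diag(1/v_i) carries the optional error weights.
    n = len(y)
    v = np.ones(n) if weights is None else weights
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                        # bias b, support values alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

def robust_weights(e, c1=2.5, c2=3.0):
    # Hampel-type weights on the errors e_i = alpha_i / gamma, scaled by a
    # robust spread estimate (MAD). Points with large standardized errors
    # are down-weighted, gross outliers get a near-zero weight.
    s = np.median(np.abs(e - np.median(e))) / 0.6745 + 1e-12
    r = np.abs(e) / s
    return np.where(r <= c1, 1.0,
                    np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-4))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (50, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.normal(size=50)
y[::10] += 1.5                                    # inject a few gross outliers
gamma = 10.0
b, alpha = lssvm_fit(X, y, gamma)                 # plain LS-SVM pass
v = robust_weights(alpha / gamma)                 # down-weight outliers
b_w, alpha_w = lssvm_fit(X, y, gamma, weights=v)  # robust, weighted re-fit

The weighted re-fit differs from the plain fit only through the diagonal term diag(1/(gamma * v_i)), which is exactly where the weighting of the title enters.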


Cited by 1,095 publications (570 citation statements; citing years 2002–2020). References 22 publications.
“…However, one inherent problem of the image-as-vector representation lies in that the spatial redundancies within each image matrix are not fully utilized, and some of the information about local spatial relationships is lost [7,8]. On the other hand, although LS-SVM improves computational efficiency over SVM, the loss of sparseness in its solution [9] makes every input pattern (e.g. an input pattern of dimension d_1 × d_2, with l patterns in total) contribute to the model, forcing LS-SVM to store the whole training set for subsequent classification and greatly increasing the memory overhead.…”
Section: Introduction
confidence: 99%
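The loss of sparseness this excerpt criticizes is what the paper's sparse-approximation step targets: because every alpha_i is nonzero, one can repeatedly discard the training points with the smallest |alpha_i| and re-estimate on the remainder. A rough sketch, reusing the hypothetical lssvm_fit above; the 5% pruning rate and the fixed stopping size stand in for a validation-based stopping rule and are illustrative only.

def lssvm_prune(X, y, gamma=10.0, sigma=1.0, drop_frac=0.05, min_size=10):
    # Iteratively remove the points with the smallest |alpha_i| (the least
    # influential ones in the LS-SVM solution) and retrain on the remainder.
    idx = np.arange(len(y))
    b, alpha = lssvm_fit(X, y, gamma, sigma)
    while len(idx) > min_size:
        k = max(1, int(drop_frac * len(idx)))
        idx = idx[np.argsort(np.abs(alpha))[k:]]  # keep the larger |alpha_i|
        b, alpha = lssvm_fit(X[idx], y[idx], gamma, sigma)
    return idx, b, alpha                          # reduced "support" set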
“…To improve feasibility, SVM has been developed over the last decade, and several effective modified SVM versions have been proposed, one of which is the least squares SVM (LS-SVM). Rather than solving the convex quadratic program required by the standard SVM, LS-SVM provides a simpler solution by solving a linear matrix equation [30,31]…”
Section: Support Vector Regression
confidence: 99%
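For reference, the linear matrix equation mentioned in this excerpt is the KKT system of LS-SVM regression. In the notation standard in the LS-SVM literature (a restatement, not a quotation from the paper):

\min_{w,b,e}\ \tfrac{1}{2}\,w^{\top}w + \tfrac{\gamma}{2}\sum_{i=1}^{N} e_i^{2}
\quad\text{s.t.}\quad y_i = w^{\top}\varphi(x_i) + b + e_i,\qquad i = 1,\dots,N.

Eliminating w and e_i via the Karush-Kuhn-Tucker conditions leaves the linear system

\begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \Omega + \gamma^{-1}I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
= \begin{bmatrix} 0 \\ y \end{bmatrix},
\qquad \Omega_{ij} = K(x_i, x_j),

so training costs one solve in N+1 unknowns rather than a QP over N box-constrained variables.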
“…Further comparison is carried out by combining the SVM identification with the inverse dynamic compensation that is commonly employed to obtain a feedback controller for a certain system (without uncertainties). By observing the inequality (24), the auxiliary controllers u1 and u2 can be designed as […]. Comparing controller (54) with (31), one can find that the controller combining inverse dynamic compensation with SVM identification (given by (54)) is more complicated than the controller in which the complicated nonlinear functions are identified by SVM (given by (31)). Figure 5 presents the simulation results using controller (54).…”
Section: Example Study
confidence: 99%
“…These models can be robustified and sparsified as explained in [7]. Many algorithms for solving the linear system require a positive definite matrix, which is not the case here.…”
Section: Nonlinear Function Estimation Using LS-SVMs
confidence: 99%
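The indefiniteness noted in this excerpt comes from the zero top-left block of the KKT matrix above. A standard workaround in the LS-SVM literature reduces the system to two solves with the positive definite block H = Omega + I/gamma; a sketch, reusing the hypothetical helpers from the first code block:

def lssvm_fit_pd(X, y, gamma=10.0, sigma=1.0):
    # From the KKT system: alpha = H^{-1}(y - b*1) with H = Omega + I/gamma,
    # and the constraint 1^T alpha = 0 then fixes b. Both solves involve only
    # the positive definite H, so Cholesky or conjugate gradients applies.
    n = len(y)
    H = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    L = np.linalg.cholesky(H)                     # fails iff H is not PD
    solve = lambda r: np.linalg.solve(L.T, np.linalg.solve(L, r))
    eta, nu = solve(np.ones(n)), solve(y)
    b = nu.sum() / eta.sum()
    return b, nu - b * eta                        # same (b, alpha) as lssvm_fit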