2002
DOI: 10.1142/9789812776655

Least Squares Support Vector Machines

Cited by 1,497 publications (1,406 citation statements)
References: 0 publications

“…The most distinguishing part of that approach is the use of a powerful machine learning method, the Least Squares Support Vector Machine (LS-SVM) [38]. This novel machine learning method sets that paper apart from existing work.…”
Section: Introduction
Confidence: 99%
“…[16,38,39] Both LS-SVM and ε-insensitive SVM have the merits of SVM approaches. However, the ε-insensitive loss function used by standard SVM only penalizes errors greater than a threshold ε. This leads to a sparse representation of the decision rule, giving significant algorithmic and representational advantages.…”
Section: Introduction
Confidence: 99%
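The excerpt above contrasts the ε-insensitive loss of standard SVM regression with the squared loss of LS-SVM. A minimal Python sketch of the two loss functions; the ε value and residuals are illustrative, not taken from the cited paper:

```python
import numpy as np

def eps_insensitive_loss(residuals, eps=0.1):
    """Zero loss inside the eps-tube, linear outside; points with
    zero loss drop out of the decision rule, giving sparsity."""
    return np.maximum(np.abs(residuals) - eps, 0.0)

def ls_svm_loss(residuals):
    """LS-SVM squared loss: every nonzero residual is penalized,
    so every training point enters the (dense) solution."""
    return residuals ** 2

r = np.array([0.02, -0.05, 0.3, -0.8, 0.0])  # illustrative residuals
print(eps_insensitive_loss(r))  # [0.  0.  0.2 0.7 0. ]
print(ls_svm_loss(r))           # [0.0004 0.0025 0.09 0.64 0. ]
```

The printed values make the sparsity argument concrete: residuals inside the ε-tube contribute exactly zero loss under the ε-insensitive formulation but are still penalized by the squared loss.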
“…The performance of SVM models depends on the penalty term C, which appears in the regularization term of the quadratic optimization, and on the kernel function applied to the training data to improve discriminability in feature space. The RBF kernel used in this paper for one-against-one classification, for a pair of support vectors x_i and x_j, is defined by [20,25]: K(x_i, x_j) = exp(−γ‖x_i − x_j‖²). (Table 1 lists the PDF-based pairwise dissimilarity metrics used in the SVM models.)…”
Section: Inductive Learning
Confidence: 99%
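A minimal sketch of the kernel and penalty term described above, using scikit-learn's SVC (which trains one-against-one classifiers for multiclass problems) as a stand-in; C, γ, and the toy data are placeholders, not values from the cited paper:

```python
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(x_i, x_j, gamma=0.5):
    """Standard RBF form: K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    return np.exp(-gamma * np.sum((x_i - x_j) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))     # toy feature vectors
y = rng.integers(0, 3, size=60)  # three illustrative classes

print(rbf_kernel(X[0], X[1]))    # kernel value for one pair of points

# SVC trains one-against-one classifiers for multiclass input;
# C is the penalty term from the excerpt, gamma the RBF width.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
print(clf.predict(X[:5]))
```

In practice C and γ would be tuned jointly, e.g. by grid search over cross-validation folds, since both control the bias–variance trade-off of the model.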
“…Strictly speaking, statistical optimality such as in the filter procedure cannot be guaranteed anymore. We compare three induction algorithms to search for the best subset of features for discriminating normal controls from stroke patients: k-nearest neighbor, least squares support vector machines (LS-SVM; Suykens et al. 2002), and a Bayesian classifier with kernel density estimation (KDE; see Devroye et al. 2001). We use Gaussian kernels and the maximum-likelihood cross-validation method for kernel bandwidth estimation in KDE.…”
Section: Feature Subset Selection
Confidence: 99%
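Of the three induction algorithms named above, the KDE Bayesian classifier is the least standard, so here is a minimal sketch of that piece alone: one Gaussian KDE per class with the bandwidth chosen by maximum-likelihood cross-validation (scikit-learn's GridSearchCV over KernelDensity scores held-out log-likelihood by default). The two-class toy data, bandwidth grid, and priors are illustrative, not from the cited study:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

def fit_kde(X_class):
    """Fit a Gaussian KDE to one class, picking the bandwidth that
    maximizes cross-validated log-likelihood."""
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": np.logspace(-1, 1, 20)}, cv=5)
    grid.fit(X_class)  # CV score is the held-out log-likelihood
    return grid.best_estimator_

def kde_bayes_predict(X, kdes, priors):
    """Assign each point to the class maximizing log p(x|c) + log p(c)."""
    scores = np.stack([kde.score_samples(X) + np.log(p)
                       for kde, p in zip(kdes, priors)], axis=1)
    return scores.argmax(axis=1)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 2))  # toy "controls"
X1 = rng.normal(2.0, 1.0, size=(50, 2))  # toy "patients"
kdes = [fit_kde(X0), fit_kde(X1)]
print(kde_bayes_predict(np.vstack([X0[:3], X1[:3]]), kdes, [0.5, 0.5]))
```

In a wrapper-style feature subset search as described in the excerpt, a classifier like this would be refit and scored for each candidate feature subset, with the subset chosen by cross-validated accuracy.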