2012
DOI: 10.3846/20294913.2012.661205

Recent Advances on Support Vector Machines Research

Abstract: Support vector machines (SVMs), with their roots in Statistical Learning Theory (SLT) and optimization methods, have become powerful tools for problem solving in machine learning. SVMs reduce most machine learning problems to optimization problems, and optimization lies at the heart of SVMs. Many SVM algorithms involve solving not only convex problems, such as linear programming, quadratic programming, second-order cone programming, and semi-definite programming, but also non-convex and more general optimization…
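
Since the abstract stresses that SVM training reduces to convex optimization, typically a quadratic program, here is a minimal sketch (an illustration, not code from the paper) that solves the soft-margin SVM dual with scipy.optimize.minimize. The helper name train_linear_svm_dual, the toy data, and C=1.0 are assumptions made for the example.

```python
# Minimal sketch: soft-margin linear SVM via its dual quadratic program.
import numpy as np
from scipy.optimize import minimize

def train_linear_svm_dual(X, y, C=1.0):
    """Maximize sum(a) - 0.5 * a^T Q a  s.t.  0 <= a_i <= C,  sum(a_i y_i) = 0."""
    n = X.shape[0]
    Q = (y[:, None] * X) @ (y[:, None] * X).T           # Q_ij = y_i y_j <x_i, x_j>

    def neg_dual(a):                                     # minimize the negated dual objective
        return 0.5 * a @ Q @ a - a.sum()

    def neg_dual_grad(a):
        return Q @ a - np.ones(n)

    cons = {"type": "eq", "fun": lambda a: a @ y}        # equality constraint sum(a_i y_i) = 0
    bounds = [(0.0, C)] * n                              # box constraints 0 <= a_i <= C
    res = minimize(neg_dual, np.zeros(n), jac=neg_dual_grad,
                   bounds=bounds, constraints=cons, method="SLSQP")
    a = res.x
    w = ((a * y)[:, None] * X).sum(axis=0)               # primal weights from the KKT conditions
    sv = (a > 1e-6) & (a < C - 1e-6)                     # margin support vectors
    b = np.mean(y[sv] - X[sv] @ w) if sv.any() else 0.0
    return w, b

# Usage on a toy linearly separable problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.hstack([-np.ones(20), np.ones(20)])
w, b = train_linear_svm_dual(X, y, C=1.0)
print("accuracy:", np.mean(np.sign(X @ w + b) == y))
```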

Cited by 152 publications (44 citation statements); references 90 publications.
“…With the real world problem presented in this paper solved there is also now scope for further work investigating the suitability of alternate techniques, such as support vector machines [19].…”
Section: Discussion (mentioning)
Confidence: 99%
“…More intuitively, the smaller |f(x_i) − f(x_j)| is, the smoother f(x) is on the data adjacency graph. So (17) can be translated into the following optimization…”
Section: Introduction (mentioning)
Confidence: 99%
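
The smoothness criterion quoted above is commonly encoded as a weighted sum of squared differences over the data adjacency graph, which equals a quadratic form in the graph Laplacian. The sketch below is an illustration of that idea, not code from either paper; the k-NN construction, the RBF edge weights, and the function name graph_smoothness are assumptions.

```python
# Minimal sketch: graph-Laplacian smoothness  sum_ij W_ij (f_i - f_j)^2 = 2 f^T L f.
import numpy as np

def graph_smoothness(X, f, k=5, gamma=1.0):
    """Smoothness of the values f over a k-NN adjacency graph built on X."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                  # k nearest neighbours (skip self)
        W[i, nbrs] = np.exp(-gamma * d2[i, nbrs])          # RBF edge weights
    W = np.maximum(W, W.T)                                 # symmetrize the adjacency graph
    L = np.diag(W.sum(1)) - W                              # unnormalized graph Laplacian
    return 2.0 * f @ L @ f

# A labelling that is constant within two well-separated clusters scores
# lower (smoother) than a random labelling of the same points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
f_smooth = np.hstack([-np.ones(30), np.ones(30)])
f_random = rng.choice([-1.0, 1.0], size=60)
print(graph_smoothness(X, f_smooth), graph_smoothness(X, f_random))
```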
“…With the evolution of SVMs, they have shown many advantages in classification with small samples, nonlinear classification, and high-dimensional pattern recognition, and they can also be applied to solving other machine learning problems [4][5][6][7][8][9][10]. The standard support vector classification attempts to minimize generalization error by maximizing the margin between two parallel hyperplanes, which results in an optimization task involving the minimization of a convex quadratic function.…”
Section: Introduction (mentioning)
Confidence: 99%
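
To make the geometric picture in that last quote concrete, here is a short sketch (an illustration, not code from the cited paper) using scikit-learn's linear SVC: the classifier places two parallel hyperplanes w·x + b = +1 and w·x + b = −1 and maximizes the margin 2/||w|| between them by solving the underlying convex quadratic program. The toy data and C=1.0 are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.8, (25, 2)), rng.normal(2, 0.8, (25, 2))])
y = np.hstack([-np.ones(25), np.ones(25)])

clf = SVC(kernel="linear", C=1.0).fit(X, y)   # solves the underlying convex QP
w, b = clf.coef_[0], clf.intercept_[0]

margin = 2.0 / np.linalg.norm(w)              # distance between the two parallel hyperplanes
print("separating hyperplane: w.x + b = 0 with w =", w, "and b =", b)
print("margin width:", margin)
print("number of support vectors:", len(clf.support_))
```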