2007
DOI: 10.1109/ijcnn.2007.4371075

On Extending the SMO Algorithm Sub-Problem

Abstract: The Support Vector Machine (SVM) is a widely employed machine learning model due to its repeatedly demonstrated superior generalization performance. The Sequential Minimal Optimization (SMO) algorithm is one of the most popular SVM training approaches. SMO is fast and easy to implement; however, it has a limited working set size (2 points only). Faster training times can result if the working set size can be increased without significantly increasing the computational complexity. In this paper, we e…
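
For context on the abstract's "limited working set size (2 points only)": two variables is the smallest working set that can respect the dual equality constraint sum_k y_k alpha_k = const, and that two-variable sub-problem has a closed-form solution, which is what makes SMO cheap per iteration. Below is a minimal sketch of that standard pairwise update (Platt's rules, not this paper's extension); the names alpha, K, E, C and the error cache E[k] = f(x_k) - y[k] are illustrative assumptions, not taken from the paper.

```python
def smo_pair_update(i, j, alpha, y, K, E, C):
    """One SMO sub-problem: analytically re-optimize the pair
    (alpha[i], alpha[j]) while keeping sum_k y[k]*alpha[k] constant.
    K is the kernel matrix, E[k] = f(x_k) - y[k] is the error cache,
    C is the box constraint. Returns True if the pair was updated."""
    if i == j:
        return False
    # Feasible segment [L, H] for alpha[j], from 0 <= alpha <= C plus
    # the equality constraint linking alpha[i] and alpha[j].
    if y[i] != y[j]:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C, C + alpha[j] - alpha[i])
    else:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C, alpha[i] + alpha[j])
    if L >= H:
        return False
    # Curvature of the objective along the constraint direction.
    eta = K[i][i] + K[j][j] - 2.0 * K[i][j]
    if eta <= 0.0:  # non-positive curvature: skip this pair
        return False
    # Unconstrained optimum for alpha[j], clipped to [L, H].
    a_j = alpha[j] + y[j] * (E[i] - E[j]) / eta
    a_j = min(H, max(L, a_j))
    # alpha[i] moves oppositely to keep the equality constraint.
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
    alpha[i], alpha[j] = a_i, a_j
    return True
```

Extending the working set beyond two points, as the paper proposes, replaces this closed-form pair update with a larger sub-problem, so the challenge is keeping its solution as cheap as the analytic one.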

Cited by 6 publications (3 citation statements)
References 14 publications

Citation statements (ordered by relevance):
“…Training methods using variable-size chunking are sometimes called active set training. The well-known training method with fixed-size chunking is Sequential Minimal Optimization (SMO) [5], which optimizes two data points at a time; it has been extended to optimize four data points at a time [6]. As a variant of decomposition-based training methods, the exact incremental training [7] is extended to batch training [8].…”
Section: Introduction (mentioning)
confidence: 99%
“…Kecman, Vogt, and Huang [5] showed that this algorithm is equivalent to the kernel-Adatron algorithm. Sentelle et al. [6] extended SMO to optimize four data points at a time.…”
Section: Introduction (mentioning)
confidence: 99%
“…To improve convergence, more than two variables are optimized at a time [22][23][24][25][26][27]. In [22], q (≥ 2) modifiable variables are selected along the steepest ascent direction and optimized.…”
(mentioning)
confidence: 99%
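
On the "steepest ascent" selection the last statement attributes to [22]: a common realization is to rank variables by how much a unit feasible step changes the dual objective and take the best q, with equal numbers stepping up and down so the equality constraint y^T alpha = const stays satisfied. The sketch below follows that idea under stated assumptions; the function and variable names are mine, and [22]'s exact rule may differ in details such as tie-breaking.

```python
def select_working_set(grad, alpha, y, C, q):
    """Pick a working set of q variables along the steepest feasible
    ascent direction of the dual objective W(alpha), to be maximized
    subject to 0 <= alpha_i <= C and y^T alpha = const.
    grad[i] is dW/d alpha_i at the current alpha."""
    n = len(alpha)
    s = [y[i] * grad[i] for i in range(n)]  # gain of a step d_i = +y[i]
    # A step d_i = +y[i] stays inside the box [0, C] when:
    plus_ok = [i for i in range(n)
               if (y[i] > 0 and alpha[i] < C) or (y[i] < 0 and alpha[i] > 0)]
    # A step d_i = -y[i] is feasible under the mirrored condition:
    minus_ok = [i for i in range(n)
                if (y[i] > 0 and alpha[i] > 0) or (y[i] < 0 and alpha[i] < C)]
    # Equal counts of +y and -y steps keep y^T d = 0, so take the q/2
    # largest scores from plus_ok and the q/2 smallest from the rest.
    top = sorted(plus_ok, key=lambda i: s[i], reverse=True)[: q // 2]
    rest = [i for i in minus_ok if i not in top]
    bottom = sorted(rest, key=lambda i: s[i])[: q // 2]
    return top + bottom
```

With q = 2 this degenerates to choosing a single maximally violating pair, which is exactly the regime the closed-form SMO pair update handles.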