2003
DOI: 10.1016/s0167-8191(03)00021-8
A parallel solver for large quadratic programs in training support vector machines

Cited by 120 publications (70 citation statements)
References 14 publications
“…The above major contributions summarize the work on how to implement a fast classic SVM with sequential programming. Some earlier works using parallel techniques in SVM training can be found in [4,6,23] and [12]. Cao et al. presented a very practical Parallel SMO [2], implemented with the Message Passing Interface on a cluster system.…”
Section: Background and Related Work
Confidence: 99%
“…For example, Zanghirati and Zanni [28] proposed a parallel implementation of SVM-light that is especially effective for Gaussian kernels; Cao et al. [3] also parallelized a slightly modified SMO algorithm. In both papers, the authors conducted experiments on up to 32 processors with 60k training samples, claiming a speed-up of approximately 20 times.…”
Section: Sequential Minimal Optimization (SMO)
Confidence: 99%
“…Unlike the sequential or centralized approaches in the literature [2], [4]-[7], the focus here is exclusively on parallel update schemes for machine learning, allowing individual processing units to perform simultaneous computations. Most of the existing contributions train SVMs either sequentially or in parallel first and then fuse them into a centralized classifier.…”
Section: Introduction
Confidence: 99%
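The "train in parallel, then fuse into a centralized classifier" strategy mentioned in the excerpt above can be illustrated with a minimal sketch. This is not the paper's decomposition algorithm: it is a simplified stand-in that trains independent linear SVMs (subgradient descent on the hinge loss) on disjoint data partitions and fuses them by averaging their weight vectors. All function names here are illustrative, not from any cited work.

```python
import numpy as np


def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Subgradient descent on the L2-regularized hinge loss (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) < 1:
                # Margin violated: hinge subgradient is active.
                w = (1 - lr * lam) * w + lr * yi * xi
            else:
                # Only the regularizer contributes to the update.
                w = (1 - lr * lam) * w
    return w


def fuse_partitioned_svms(X, y, n_parts=4):
    """Train one SVM per disjoint data partition, fuse by weight averaging.

    Each partition could be trained on a separate processing unit; here the
    loop stands in for that parallelism.
    """
    parts = np.array_split(np.arange(len(y)), n_parts)
    ws = [train_linear_svm(X[idx], y[idx]) for idx in parts]
    return np.mean(ws, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Linearly separable toy data: label is the sign of the first feature.
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] > 0, 1, -1).astype(float)
    w = fuse_partitioned_svms(X, y)
    print(np.mean(np.sign(X @ w) == y))
```

Weight averaging is only one possible fusion rule; cascade schemes instead pass the support vectors of each sub-SVM upward for a second round of training, which is closer to what some of the cited parallel approaches do.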