2008 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2008.4587354

A Parallel Decomposition Solver for SVM: Distributed dual ascend using Fenchel Duality

Abstract: We introduce a distributed algorithm for solving large scale Support Vector Machine (SVM) problems. The algorithm divides the training set among a number of processing nodes, each independently running an SVM sub-problem associated with its subset of the training data. The algorithm is a parallel (Jacobi) block-update scheme derived from the convex conjugate (Fenchel Duality) form of the original SVM problem. Each update step consists of a modified SVM solver running in parallel over the sub-problems, followed by a …
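
For orientation, the decomposition described above rests on the standard Fenchel-dual form of a sum-structured objective. The following sketch uses our own notation and is not necessarily the paper's exact formulation. Partition the training set into p blocks S_1, ..., S_p and write the SVM primal as

    \min_w \; \frac{\lambda}{2}\|w\|^2 + \sum_{j=1}^{p} f_j(w),
    \qquad f_j(w) = \sum_{i \in S_j} \max\bigl(0,\, 1 - y_i \langle x_i, w \rangle\bigr).

Since the convex conjugate of (\lambda/2)\|\cdot\|^2 is (1/2\lambda)\|\cdot\|^2, Fenchel duality gives the dual problem

    \max_{\mu_1, \dots, \mu_p} \; -\sum_{j=1}^{p} f_j^{*}(\mu_j) \;-\; \frac{1}{2\lambda}\Bigl\|\sum_{j=1}^{p} \mu_j\Bigr\|^2,

where the blocks \mu_j interact only through the shared quadratic term, and the primal weight vector is recovered from the aggregate \sum_j \mu_j (up to sign and scaling conventions). Fixing all blocks but one leaves a sub-problem with the structure of a single SVM, which is why each node can run a modified SVM solver; a Jacobi sweep updates all p blocks in parallel.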

Cited by 23 publications (19 citation statements, published between 2009 and 2019). References 12 publications.

Citation statements (ordered by relevance):
“…The solution to the optimization problem is achieved using a parallel interior point method (IPM), which computes the update rules in a distributed fashion. Hazan et al [16] present a method for parallel SVM learning based on the parallel Jacobi block-update scheme derived from the convex conjugate (Fenchel duality). Unfortunately, this method cannot guarantee optimality.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
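
To make the Jacobi block-update scheme concrete, here is a minimal single-process NumPy sketch. It is our own toy rendering of the idea in these statements, not the paper's implementation: each block sub-problem is solved approximately by projected gradient ascent on the SVM dual, and the 1/p damping on the combine step is the standard safeguard that keeps a fully parallel (Jacobi) sweep monotone.

    import numpy as np

    def local_block_ascent(alpha_j, Xj, yj, w_rest, lam, steps=20, eta=0.1):
        """Approximately maximize the SVM dual over one block, others fixed.

        Dual (hinge loss):  D(a) = sum(a) - ||sum_i a_i y_i x_i||^2 / (2 lam),
        subject to 0 <= a_i <= 1.  w_rest holds the other blocks' contribution.
        """
        a = alpha_j.copy()
        for _ in range(steps):
            w = w_rest + Xj.T @ (a * yj) / lam     # current primal iterate
            grad = 1.0 - yj * (Xj @ w)             # dD/da for this block
            a = np.clip(a + eta * grad, 0.0, 1.0)  # projected gradient step
        return a

    def jacobi_svm(X, y, lam=1.0, p=4, outer=50):
        n = len(y)
        blocks = np.array_split(np.arange(n), p)   # partition the training set
        alpha = np.zeros(n)
        for _ in range(outer):
            w_full = X.T @ (alpha * y) / lam
            proposals = []
            for idx in blocks:                     # these solves would run on
                Xj, yj = X[idx], y[idx]            # separate nodes in parallel
                w_rest = w_full - Xj.T @ (alpha[idx] * yj) / lam
                proposals.append(local_block_ascent(alpha[idx], Xj, yj, w_rest, lam))
            for idx, a_hat in zip(blocks, proposals):
                alpha[idx] += (a_hat - alpha[idx]) / p   # damped Jacobi combine
        return X.T @ (alpha * y) / lam             # recover the primal weights

    # Toy usage on (nearly) linearly separable data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + X[:, 1])
    w = jacobi_svm(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))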
“…Zanni et al [14] parallelize SVM-light with improved working set selection and inner QP solver. Hazan et al [22] propose a parallel decomposition solver using Fenchel Duality. Lu et al [23] parallelize randomized sampling algorithms for SVM and SVR.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
“…The computation time in SVM training is quadratic in the number of training instances. To speed up SVM training, distributed computing paradigms have been investigated to partition a large training dataset into small data chunks and process each chunk in parallel, utilizing the resources of a cluster of computers [5][6][7][8]. The approaches include those that are based on the Message Passing Interface (MPI) [9].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
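
As a concrete illustration of the partition-and-aggregate pattern mentioned here, a hypothetical mpi4py skeleton follows. The per-node subgradient solver and the plain averaging of local solutions are our own simplifications for the sketch, not the method of any particular cited system.

    import numpy as np
    from mpi4py import MPI

    def solve_local_svm(X, y, w, lam=1.0, steps=20, eta=0.05):
        """A few primal subgradient steps on the local hinge-loss objective."""
        for _ in range(steps):
            viol = y * (X @ w) < 1.0                      # margin violators
            g = lam * w - X[viol].T @ y[viol] / len(y)    # subgradient
            w = w - eta * g
        return w

    # Launch with e.g. `mpirun -n 4 python train.py`; each rank owns one chunk.
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    rng = np.random.default_rng(rank)                     # synthetic stand-in
    X_local = rng.normal(size=(500, 2))                   # for this rank's
    y_local = np.sign(X_local[:, 0] + X_local[:, 1])      # data chunk

    w = np.zeros(X_local.shape[1])
    for _ in range(50):
        w_local = solve_local_svm(X_local, y_local, w)    # local sub-problem
        w = comm.allreduce(w_local, op=MPI.SUM) / size    # average and share

    if rank == 0:
        print("consensus weights:", w)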
“…Various forms of summarization and aggregation are then performed to identify the final set of global support vectors. Hazan et al [5] introduced a parallel decomposition algorithm for training SVMs, where each computing node is responsible for a predetermined subset of the training data. The results of the subset solutions are combined and sent back to the computing nodes iteratively.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
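
The iterate-and-combine loop quoted above admits a simple convexity argument that is the standard way such parallel sweeps are kept monotone; the following is a generic sketch, not necessarily the paper's exact combine rule. If \hat{\mu}_j denotes node j's block-optimal dual solution with the other blocks held fixed, the damped update

    \mu_j^{t+1} = \mu_j^t + \frac{1}{p}\bigl(\hat{\mu}_j - \mu_j^t\bigr), \qquad j = 1, \dots, p,

is exactly the average of the p Gauss-Seidel iterates \mu^{(1)}, \dots, \mu^{(p)} (each equal to \mu^t with a single block replaced by its optimum). Concavity of the dual objective D then gives D(\mu^{t+1}) \ge \frac{1}{p}\sum_k D(\mu^{(k)}) \ge D(\mu^t), so the dual value never decreases even though all blocks move at once.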