2016 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2016.7727590

A convergent and fully distributable SVMs training algorithm

Abstract: The Support Vector Machines (SVMs) dual formulation has a non-separable structure that makes the design of a convergent distributed algorithm a very difficult task. Recently, some separable and distributable reformulations of the SVM training problem have been obtained by fixing one primal variable. While this strategy seems effective for some applications, in certain cases it can be weak, since it drastically reduces the overall final performance. In this work we present the first fully distributable algorith…
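To see where the coupling comes from, the block below recalls the textbook soft-margin dual in standard notation (assumed here, not taken from the paper's own formulation): the single equality constraint ties every multiplier to all the others, whereas fixing the bias b in the primal removes that constraint and leaves only box constraints, i.e. a feasible set that is a Cartesian product and therefore amenable to block decomposition.

```latex
% Standard soft-margin SVM dual: the equality constraint couples all the \alpha_i.
\max_{\alpha \in \mathbb{R}^n} \;
  \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,k(x_i,x_j)
\quad \text{s.t.} \quad
  \sum_{i=1}^{n} y_i\alpha_i = 0, \qquad 0 \le \alpha_i \le C .

% With the bias b fixed in the primal, the dual keeps only box constraints,
% so the feasible set splits across blocks of training points:
\max_{\alpha \in \mathbb{R}^n} \;
  \sum_{i=1}^{n} \alpha_i\,(1 - y_i b)
  - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,k(x_i,x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C .
```

The objective of the fixed-bias problem is still coupled through the kernel matrix, but the feasible set no longer is, which is what block-wise and parallel update schemes need.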

Cited by 10 publications (9 citation statements) | References 8 publications
“…Therefore, the overall Algorithm 1, executing iteratively a distributed minimization over the variables x, can be viewed itself as a distributed algorithm (see e.g. [31,37]). Nevertheless, solving an (inexact) optimization problem at each iteration may be numerically inefficient.…”
Section: Algorithm 1: Basic Augmented Lagrangian Method
confidence: 99%
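The pattern described in this snippet, an outer augmented-Lagrangian loop whose inner minimization over x is itself carried out blockwise, can be made concrete with a minimal sketch on a toy problem. The toy objective, block split, penalty value, and iteration counts below are illustrative assumptions, not the citing paper's Algorithm 1.

```python
import numpy as np

# Toy separable objective with a single coupling constraint:
#   minimize  0.5*a1*(x1 - t1)**2 + 0.5*a2*(x2 - t2)**2   s.t.  x1 + x2 = c
a1, t1 = 2.0, 1.0
a2, t2 = 1.0, -3.0
c = 4.0

rho = 10.0           # penalty parameter
lam = 0.0            # multiplier estimate
x1, x2 = 0.0, 0.0    # starting point

for outer in range(50):
    # Inner (inexact) minimization of the augmented Lagrangian over x,
    # done blockwise: each block update has a closed form for this toy problem.
    for inner in range(5):
        x1 = (a1 * t1 - lam + rho * (c - x2)) / (a1 + rho)
        x2 = (a2 * t2 - lam + rho * (c - x1)) / (a2 + rho)
    # Multiplier update driven by the coupling-constraint residual.
    lam += rho * (x1 + x2 - c)

print(x1, x2, x1 + x2 - c)   # converges to x1=3, x2=1 with residual ~0
```

Each outer iteration pays for several inner sweeps, which is exactly the numerical-inefficiency concern raised in the quoted passage.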
“…The latter is the case of support vector machines training, because in its dual quadratic formulation a column of the matrix Q requires in general O(n²) nonlinear calculations [28,30]. Nonetheless, we underline that the above scheme is not directly applicable to the support vector machines quadratic formulation due to the presence of a coupling linear constraint (m = 1) [31].…”
Section: Motivation
confidence: 99%
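To illustrate why forming Q is the expensive part: each entry needs a nonlinear kernel evaluation over the feature vectors, so the full n-by-n matrix cannot be materialized for large data and decomposition solvers compute only the columns they currently need. The Gaussian kernel and the column-wise helper below are illustrative choices, not taken from the cited papers.

```python
import numpy as np

def rbf_kernel(xi, X, gamma=0.5):
    """Gaussian kernel values k(xi, x_j) for every row x_j of X."""
    return np.exp(-gamma * np.sum((X - xi) ** 2, axis=1))

def q_column(j, X, y, gamma=0.5):
    """Column j of Q, where Q[i, j] = y_i * y_j * k(x_i, x_j).

    One column needs n kernel evaluations, each a nonlinear computation
    over the feature vectors; the full n x n matrix needs n**2 of them,
    which is why decomposition methods build columns on demand.
    """
    return y * y[j] * rbf_kernel(X[j], X, gamma)

# Tiny synthetic example.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = np.where(rng.random(6) > 0.5, 1.0, -1.0)
print(q_column(2, X, y))
```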
“…Generally, according to the type of ML architecture, the training phase is formulated as a specific optimization problem. Among all existing architectures, some of the most popular are the Support Vector Machines (SVMs) [18], [19] and the Artificial Neural Networks (ANNs) [20]. While SVMs are mainly used for classification tasks, ANNs are used for both classification and regression.…”
Section: An Overview On Supervised Learning and Neural Networks
confidence: 99%
“…On the other hand, it is well-known that in certain cases, for example when the distribution of the data in the two classes is uneven (see [30]), the bias plays a crucial role concerning the generalization performance. Recently a new provably convergent method has been proposed in [26] that, iteratively adjusting the offset values, computes a solution of the bias problem (2) by solving a sequence of dual formulations that do not include the difficult equality constraint and, then, can be solved in parallel. Even if this method solves the more general bias version, it requires multiple parallel optimizations in order to solve any single SVM training.…”
confidence: 99%
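The quoted idea, solving a sequence of fixed-offset duals that contain no equality constraint, can be made concrete with a deliberately naive sketch: treat the optimal value of the fixed-b subproblem as a one-dimensional convex function of b (it equals the fixed-b primal optimum by strong duality) and minimize it by golden-section search, solving each box-constrained subproblem with projected gradient ascent. The inner solver, linear kernel, step size, and search interval are all assumptions made for illustration; this is not the update rule of [26].

```python
import numpy as np

def fixed_b_dual_value(b, Q, y, C, iters=500):
    """Solve  max_a  sum_i a_i*(1 - y_i*b) - 0.5*a'Qa   s.t.  0 <= a_i <= C
    (the fixed-offset dual: box constraints only, no equality constraint)
    by projected gradient ascent and return the value reached."""
    a = np.zeros(len(y))
    step = 1.0 / (np.linalg.norm(Q, 2) + 1e-12)
    lin = 1.0 - y * b
    for _ in range(iters):
        a = np.clip(a + step * (lin - Q @ a), 0.0, C)
    return lin @ a - 0.5 * a @ Q @ a

def train_with_offset_search(X, y, C=1.0, lo=-5.0, hi=5.0, tol=1e-3):
    """Naive outer loop: golden-section search over the offset b, where each
    evaluation solves a separable (box-constrained) fixed-b dual."""
    K = X @ X.T                                   # linear kernel, illustrative
    Q = (y[:, None] * y[None, :]) * K
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    while hi - lo > tol:
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        # The fixed-b optimal value is convex in b, so bracket the minimizer.
        if fixed_b_dual_value(m1, Q, y, C) < fixed_b_dual_value(m2, Q, y, C):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.5, 1.0, (20, 2)), rng.normal(-1.5, 1.0, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])
print("estimated offset b:", train_with_offset_search(X, y))
```

The sketch also makes the quoted drawback visible: every outer evaluation of b requires a full subproblem solve, i.e. multiple optimizations per single SVM training.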