2018
DOI: 10.1007/s10589-018-9987-0
Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

Abstract: We consider the convex quadratic linearly constrained problem with bounded variables and a huge, dense Hessian matrix that arises in many applications, such as the training problem of bias support vector machines. We propose a decomposition algorithmic scheme suitable for parallel implementations and prove global convergence under suitable conditions. Focusing on support vector machines training, we outline how these assumptions can be satisfied in practice and we suggest various specific implementation…
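For orientation, the bias-SVM dual training problem that the abstract refers to can be written in its standard textbook form (conventional notation; the paper's own notation may differ):

```latex
\min_{\alpha \in \mathbb{R}^n} \; \frac{1}{2}\,\alpha^{\top} Q \alpha - e^{\top}\alpha
\qquad \text{s.t.} \qquad y^{\top}\alpha = 0, \quad 0 \le \alpha_i \le C, \;\; i = 1,\dots,n,
```

where $Q_{ij} = y_i y_j k(x_i, x_j)$ is the huge, dense Hessian built from the kernel $k$, $e$ is the all-ones vector, and $y^{\top}\alpha = 0$ is the single coupling linear constraint that the citing works below discuss.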

Cited by 9 publications (9 citation statements)
References 26 publications
“…Different working set selection rules, based on sufficient predicted descent, have also been studied in [30]. Moreover, a Jacobi-type algorithm has been devised in [15] and a class of parallel decomposition methods for quadratic objective functions has been proposed in [18].…”
Section: Introduction (mentioning)
confidence: 99%
“…This plays a fundamental role in applications where different pieces of information are available only to different processes, or where obtaining such data entails a significant computational burden. The latter is the case in support vector machines training, because in its dual quadratic formulation a column of the matrix Q requires in general O(n²) nonlinear calculations [28,30]. Nonetheless, we underline that the above scheme is not directly applicable to the support vector machines quadratic formulation due to the presence of a coupling linear constraint (m = 1) [31].…”
Section: Motivation (mentioning)
confidence: 99%
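To make the cost discussed above concrete, the sketch below (hypothetical helper names; a Gaussian kernel is assumed purely for illustration) builds a single column of the dual Hessian Q on demand — each entry is one nonlinear kernel evaluation, which is why Q is typically never stored in full for large n:

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: one nonlinear evaluation per matrix entry."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def hessian_column(X, y, j, gamma=0.5):
    """Column j of Q, where Q[i, j] = y[i] * y[j] * k(x_i, x_j).

    Computed on demand: for huge n the full n-by-n dense Q cannot be
    stored, so decomposition methods request columns as needed.
    """
    n = X.shape[0]
    col = np.empty(n)
    for i in range(n):
        col[i] = y[i] * y[j] * rbf_kernel(X[i], X[j], gamma)
    return col

# Tiny example: 4 points in 2D with +/-1 labels.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
q0 = hessian_column(X, y, j=0)
```

The diagonal entry `q0[0]` equals 1 (the RBF kernel of a point with itself is 1, and `y[0] * y[0] = 1`), while entries for oppositely labeled points are negative.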
“…In particular, in the dual formulation of the support vector machines training problem, the presence of a linear equality constraint (i.e. a coupling constraint) over all the variables makes distributing the data difficult [26,28,30,31,33].…”
Section: Introduction (mentioning)
confidence: 99%
“…A proper prediction model trained for classification should be able to reconstruct the unknown class membership (output) for any given sample (input). The most commonly used ML techniques for classification are Decision Trees [47], Support Vector Machines (SVMs) [9,35,36], ANNs [7,20,23], and, more recently, Deep Neural Networks (DNNs) [31]. Although all of the above methods have often been applied to classification tasks in healthcare domains [21,27,34,53], the Neural Network-based ones seem best suited to capture the complicated, hidden nonlinear relationships between input and output, and are therefore the most widely used in this kind of application.…”
Section: Introduction (mentioning)
confidence: 99%