2013
DOI: 10.48550/arxiv.1302.3092
Preprint

Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC

Cited by 2 publications (2 citation statements)
References 0 publications
“…Speedup depends on the sparsity of the data matrix that defines the loss functions. A similar synchronous parallel method was studied in [26] and [8]; the latter focuses on the case of g(x) = ‖x‖₁. Scherrer et al. [36] make greedy choices of multiple blocks of variables to update in parallel.…”
mentioning
confidence: 99%
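To make the synchronous parallel update referenced above concrete, here is a minimal Python sketch of one such iteration for the ℓ1-regularized least-squares case g(x) = ‖x‖₁ singled out in the statement. The damping factor, step sizes, and soft-thresholding helper are illustrative assumptions, not the exact schemes of [8], [26], or [36]; in particular, the safe damping factor shrinks when the data matrix is sparse, which is where the speedup comes from.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def parallel_cd_step(A, b, x, lam, L, beta):
    """One synchronous parallel coordinate-descent iteration for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Every coordinate computes its prox step from the same iterate x,
    so all updates can run concurrently; beta is a damping factor
    (beta = n is always safe for dense A, smaller suffices for sparse A).
    """
    grad = A.T @ (A @ x - b)      # full gradient at the shared iterate
    step = 1.0 / (beta * L)       # per-coordinate step sizes
    return soft_threshold(x - step * grad, lam * step)

# Hypothetical usage on random data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
L = np.sum(A**2, axis=0)          # coordinate-wise curvature ||A[:, i]||^2
x = np.zeros(20)
for _ in range(500):
    x = parallel_cd_step(A, b, x, lam=0.1, L=L, beta=A.shape[1])
```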
“…Doing this accurately is a costly operation; on the other hand, an inaccurate setting based on cheaper computations (e.g., using the number of non-zero elements, as suggested in their work) results in slower convergence (see Section 6). Necoara and Clipici (2013) suggest another variant of parallel coordinate descent in which all the variables are updated in each iteration. HYDRA and GROCK can be considered as two key, distinct methods that represent the set of methods discussed above.…”
Section: Related Work
mentioning
confidence: 99%
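The Necoara and Clipici variant described here updates every coordinate at each iteration, which is attractive when the constraints are separable because the projection then decouples across coordinates. The sketch below is a hedged illustration of that structure for a box-constrained quadratic (the box standing in for a generic separable constraint set); it is not the authors' exact algorithm, and the damping factor beta is the same conservative device as in the previous sketch.

```python
import numpy as np

def full_parallel_step(A, b, x, lo, hi, L, beta):
    """One iteration in which *all* coordinates are updated, for
        min_x 0.5 * ||A x - b||^2   s.t.   lo <= x <= hi.
    Because the feasible set is a product of per-coordinate intervals
    (a separable constraint), the projection splits into independent
    clips, so every coordinate can be handled by a separate worker.
    """
    grad = A.T @ (A @ x - b)
    y = x - grad / (beta * L)     # simultaneous damped updates
    return np.clip(y, lo, hi)     # coordinate-wise projection

# Hypothetical usage.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
L = np.sum(A**2, axis=0)          # per-coordinate curvature bounds
x = np.zeros(10)
for _ in range(300):
    x = full_parallel_step(A, b, x, lo=-1.0, hi=1.0, L=L, beta=A.shape[1])
```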