2013 Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2013.6810364
Parallel and distributed sparse optimization

Abstract: This paper proposes parallel and distributed algorithms for solving very large-scale sparse optimization problems on computer clusters and clouds. Modern datasets usually have a large number of features or training samples, and they are usually stored in a distributed manner. Motivated by the need to solve sparse optimization problems with large datasets, we propose two approaches: (i) distributed implementations of prox-linear algorithms and (ii) GRock, a parallel greedy block coordinate descent method…
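The prox-linear approach named in the abstract builds on the standard proximal gradient iteration. Below is a minimal serial sketch for the LASSO instance of the problem; the function names are hypothetical, and the paper's distributed variants split the gradient computation A^T(Ax - b) across machines holding blocks of A rather than running it on one node.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_linear_lasso(A, b, lam, step, n_iter=500):
    # Prox-linear (proximal gradient) iteration for
    #   minimize_x  0.5 * ||A x - b||^2 + lam * ||x||_1.
    # Serial sketch only: the distributed versions compute the gradient
    # A^T (A x - b) in pieces on machines that each hold a row block or
    # column block of A, then combine the partial results.
    # Convergence requires step <= 1 / ||A||_2^2.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)   # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```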

Citation Types: 0 supporting, 90 mentioning, 0 contrasting

Cited by 78 publications (90 citation statements)
References 10 publications

“Any limit point of the sequence {x^t} generated by (5) with the simplified exact line search (12) or the simplified successive line search (14) is a stationary point of (1).”
Section: Theorem 1
confidence: 99%

“…Thus the proposed line search methods (12) and (14) generally yield faster convergence than state-of-the-art decreasing stepsizes (6) and are easier to implement than state-of-the-art line search techniques (8) and (9), as will be illustrated in the next section for the example of the LASSO problem in sparse signal estimation.”
Section: Sketch of Proof
confidence: 99%
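Equations (12) and (14) belong to the citing paper and are not reproduced on this page. As a hedged illustration only, the sketch below follows the standard successive-convex-approximation construction of a simplified exact line search for LASSO: along the direction d = Bx - x, the l1 term is upper-bounded by (1 - gamma)*||x||_1 + gamma*||Bx||_1 (by convexity), so the stepsize subproblem becomes quadratic in gamma with a closed-form minimizer.

```python
import numpy as np

def simplified_exact_step(A, b, lam, x, Bx):
    # Hedged sketch (an assumed form, not the citing paper's exact (12)):
    # minimize over gamma in [0, 1] the upper bound
    #   0.5 * ||A(x + gamma * d) - b||^2
    #     + lam * ((1 - gamma) * ||x||_1 + gamma * ||Bx||_1),
    # whose derivative in gamma is linear, giving a closed-form root.
    d = Bx - x
    Ad = A @ d
    resid = A @ x - b
    num = -(Ad @ resid + lam * (np.linalg.norm(Bx, 1) - np.linalg.norm(x, 1)))
    den = Ad @ Ad
    return float(np.clip(num / max(den, 1e-12), 0.0, 1.0))
```

A closed form of this kind avoids both the tuning of decreasing stepsizes and the repeated objective evaluations of backtracking, which is consistent with the quoted claim.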
“…Thus investigations are made into how the sparse estimation problem can be decomposed to allow solution using multiple processors running in parallel. Based on [40], three types of data distribution for the dictionary matrix are illustrated, namely row block distribution, column block distribution, and general block distribution, as shown in Fig. 4.2 below.”
Section: 1 Problem Decomposition
confidence: 99%
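As a concrete, hypothetical illustration of the three layouts named in the quote, with plain NumPy slicing standing in for per-node storage:

```python
import numpy as np

A = np.random.randn(8, 6)   # dictionary matrix (toy size)

# Row block distribution: each node holds a horizontal slice of A
# (a subset of rows/samples); per-node gradients A_i^T (A_i x - b_i)
# are summed across nodes.
row_blocks = np.array_split(A, 2, axis=0)

# Column block distribution: each node holds a vertical slice of A
# (a subset of columns/features) plus the matching block of x;
# partial products A_j x_j are summed to reconstruct A x.
col_blocks = np.array_split(A, 3, axis=1)

# General block distribution: a 2-D grid of blocks combining both
# splittings, for data too large in both dimensions.
grid = [np.array_split(rb, 3, axis=1) for rb in row_blocks]
```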
“…They present theoretical estimates of the speed-up factor of parallelization. Peng et al. [12] proposed a greedy block-coordinate descent method, which selects the next coordinate to update based on the estimate of the expected improvement in the objective. They found their GRock algorithm to be superior to parallel FISTA and ADMM.”
confidence: 99%
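To make the greedy selection concrete, here is a hedged sketch with scalar blocks on LASSO; the merit score |d_i| is a simplification of the expected-improvement estimate described in the quote, and all names are illustrative:

```python
import numpy as np

def grock_style_step(A, b, lam, x, P):
    # Hedged sketch of a GRock-style greedy step (scalar blocks):
    # 1) compute each coordinate's exact minimizer holding others fixed,
    # 2) rank coordinates by |candidate - current| as a stand-in for the
    #    expected objective improvement, 3) update the P best in parallel.
    # Updating several coordinates at once is only safe when the chosen
    # columns of A are nearly orthogonal (use a small P for correlated data).
    col_sq = (A * A).sum(axis=0)          # per-column squared norms
    grad = A.T @ (A @ x - b)
    z = x - grad / col_sq                 # unconstrained coordinate targets
    x_cand = np.sign(z) * np.maximum(np.abs(z) - lam / col_sq, 0.0)
    merit = np.abs(x_cand - x)            # greedy score (assumed measure)
    top = np.argsort(merit)[-P:]          # indices of the P best coordinates
    x_new = x.copy()
    x_new[top] = x_cand[top]
    return x_new
```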