2015
DOI: 10.1109/tsp.2015.2461520

A Proximal Gradient Algorithm for Decentralized Composite Optimization

Abstract: This paper proposes a decentralized algorithm for solving a consensus optimization problem defined in a static networked multi-agent system, where the local objective functions have the smooth+nonsmooth composite form. Examples of such problems include decentralized constrained quadratic programming and compressed sensing problems, as well as many regularization problems arising in inverse problems, signal processing, and machine learning, which have decentralized applications. This paper addresses the need fo…
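The smooth+nonsmooth structure is what makes a proximal gradient approach natural here: each agent takes a gradient step on its smooth part and a proximal step on its nonsmooth part, while a mixing matrix enforces consensus across the network. Below is a minimal sketch of a PG-EXTRA-style iteration written from this description; the mixing matrix W, the step size alpha, and all function and variable names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1, a common nonsmooth part r_i.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pg_extra_sketch(W, grad_s, prox_r, x0, alpha, num_iters=500):
    """Sketch of a PG-EXTRA-style iteration for
        minimize_x  sum_i s_i(x) + r_i(x)
    over a static, connected, undirected network.

    W      : (n, n) symmetric doubly stochastic mixing matrix
    grad_s : maps the (n, p) stacked iterate to the stacked gradients
             of the smooth parts s_i (row i uses only agent i's data)
    prox_r : applies prox_{alpha * r_i} row-wise
    x0     : (n, p) initial iterate, one row per agent
    alpha  : step size, assumed small enough for convergence
    """
    n = W.shape[0]
    W_tilde = 0.5 * (np.eye(n) + W)          # standard choice (I + W) / 2
    x_prev = x0
    half_prev = W @ x0 - alpha * grad_s(x0)  # first gradient half-step
    x = prox_r(half_prev, alpha)             # first proximal half-step
    for _ in range(num_iters):
        # Corrected gradient half-step built from the two latest iterates;
        # communication with neighbors happens only in the W products.
        half = (half_prev + W @ x - W_tilde @ x_prev
                - alpha * (grad_s(x) - grad_s(x_prev)))
        x_prev, half_prev = x, half
        x = prox_r(half, alpha)              # proximal half-step
    return x                                 # rows approach a common minimizer
```

For a decentralized lasso instance, for example, grad_s would stack the local least-squares gradients and prox_r could be `lambda v, a: soft_threshold(v, a * lam)` for a regularization weight lam; both act row-wise, so each agent only ever touches its own data and its neighbors' iterates.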

Cited by 264 publications (288 citation statements)
References 29 publications
Citation statements (ordered by relevance):
“…Moreover, we can observe from the numerical results that P-ExtraPush can generally accept a larger range of step size than PG-ExtraPush. Similar phenomenon between P-EXTRA and PG-EXTRA is also observed and verified in [16]. Moreover, we show the potential of PG-ExtraPush for solving the decentralized nonconvex regularized optimization problems.…”
Section: Results (supporting, confidence: 83%)
“…Therefore, PG-ExtraPush reduces to PG-EXTRA [16], a recent algorithm for composite consensus optimization over undirected networks.…”
Section: Special Cases: PG-EXTRA, ExtraPush, and P-ExtraPush (mentioning, confidence: 99%)
“…The works [27]-[29] developed a class of distributed optimization algorithms that are built on mirror descent, which generalize the projection step by using the Bregman divergence. Different from the aforementioned works that deal only with non-composite objective functions, the authors in [16], [31] considered a decentralized composite optimization problem where the local objective function of every node is composed of a smooth function and a nonsmooth regularizer. This problem naturally arises in many real applications including distributed estimation in sensor networks [4], [10], distributed quadratic programming [31], and distributed machine learning [30], [32], to name a few.…”
Section: Introduction (mentioning, confidence: 99%)
“…Moreover, this rate of convergence is the same as that of [11], where the objective function is non-composite and the algorithm is gradient descent based. Different from the works [3], [16], [31], where the objective functions are time-invariant and the algorithm is gradient descent based, algorithm ODCMD is online and based on mirror descent.…”
Section: Introduction (mentioning, confidence: 99%)