2020
DOI: 10.1109/tsipn.2019.2957719
Communication-Censored Linearized ADMM for Decentralized Consensus Optimization

Abstract: In this paper, we propose a communication- and computation-efficient algorithm to solve a convex consensus optimization problem defined over a decentralized network. A remarkable existing algorithm to solve this problem is the alternating direction method of multipliers (ADMM), in which at every iteration every node updates its local variable by combining neighboring variables and solving an optimization subproblem. The proposed algorithm, called communication-censored linearized ADMM (COLA), leverages a…
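The core censoring idea the abstract alludes to can be illustrated with a minimal sketch: a node transmits its local variable only when it has changed sufficiently since its last transmission, and otherwise lets neighbors keep the stale copy. The helper name and threshold rule below are illustrative assumptions, not the exact COLA censoring test.

```python
import numpy as np

def censored_broadcast(x_new, x_last_sent, tau):
    """Hypothetical censoring rule: transmit only if the local variable
    has moved more than tau since the last transmitted value."""
    if np.linalg.norm(x_new - x_last_sent) > tau:
        return True, x_new        # change exceeds threshold: transmit fresh value
    return False, x_last_sent     # censored: neighbors reuse the stale copy

# Toy illustration of the trigger.
sent, state = censored_broadcast(np.array([1.0]), np.array([0.0]), tau=0.5)
assert sent                       # large update is transmitted
sent, state = censored_broadcast(np.array([1.01]), np.array([1.0]), tau=0.5)
assert not sent                   # small update is censored
```

In censored schemes of this kind, the threshold typically decays over iterations, so early (large) updates are shared while late (small) refinements are suppressed, saving communication.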

Cited by 24 publications (15 citation statements)
References 47 publications (89 reference statements)
“…To investigate the communication efficiency, we compare our approaches with state-of-the-art consensus optimization methods: 1) WADMM in [5], where the agent activating order follows a random walk over the network; 2) D-ADMM in [16]; 3) DGD in [8]; and 4) EXTRA in [9], with respect to the accuracy [11], which is defined as…”
Section: A Simulation Setup
confidence: 99%
“…In [5]-[11], a few distributed algorithms have been developed to address optimization problem (1). Primal and primal-dual methods are currently the two main families of solutions, which include, e.g., gradient descent (GD)-based methods and alternating direction method of multipliers (ADMM)-based methods, respectively.…”
confidence: 99%
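The primal (GD-based) family mentioned above can be sketched with a generic decentralized gradient descent step: each node mixes its variable with its neighbors' via a doubly stochastic matrix, then takes a local gradient step. This is a minimal illustration under assumed quadratic local costs, not the specific DGD or EXTRA algorithms cited as [8] and [9].

```python
import numpy as np

# Three nodes on a path graph each hold f_i(x) = 0.5 * (x - b_i)^2,
# so the consensus optimum is the average of b (here 3.0).
b = np.array([1.0, 2.0, 6.0])
W = np.array([[0.5, 0.5, 0.0],   # symmetric doubly stochastic mixing matrix
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
alpha = 0.02                     # constant step size: iterates settle in an
x = np.zeros(3)                  # O(alpha) neighborhood of the optimum
for _ in range(2000):
    x = W @ x - alpha * (x - b)  # mix with neighbors, then local gradient step

assert abs(x.mean() - b.mean()) < 1e-6     # network average reaches the optimum
assert np.allclose(x, b.mean(), atol=0.15) # nodes agree near the average
```

With a constant step size the nodes only reach a neighborhood of exact consensus; this residual disagreement is precisely what primal-dual (ADMM-based) methods and corrected schemes like EXTRA are designed to eliminate.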
“…Such metrics based on second-order beliefs (estimating the estimates of the receiving agents) can provide similar benefits to communication efficiency in other decentralized game-theoretic learning algorithms based on, e.g., gradient descent [21]-[24], best-response [25], ADMM [26], and other adaptive strategies [27]. Indeed, communication-censoring protocols that rely on some form of novelty-of-information metric recently proved viable in reducing communication attempts in distributed stochastic gradient descent [28], [29] and ADMM [30] in the context of optimization. In the class of information exchange protocols considered here, while the novelty-of-information metric is sender specific, the metric on the potential effect of information on others' assessments is receiving-agent specific.…”
Section: A Related Literature
confidence: 99%