2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2016.7472455
Quantized consensus ADMM for multi-agent distributed optimization

Abstract: Multi-agent distributed optimization over a network minimizes a global objective formed by a sum of local convex functions using only local computation and communication. We develop and analyze a quantized distributed algorithm based on the alternating direction method of multipliers (ADMM) when inter-agent communications are subject to finite capacity and other practical constraints. While existing quantized ADMM approaches only work for quadratic local objectives, the proposed algorithm can deal with more ge…
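The abstract describes agents minimizing a sum of local convex functions while exchanging quantized states. As a loose illustration of that setting only (not the paper's algorithm, whose details are truncated above), the following toy sketch runs standard decentralized consensus ADMM on a complete graph of three agents with scalar quadratic local objectives f_i(x) = (x − a_i)²/2, where each agent broadcasts a uniformly rounded copy of its state. All names, parameter values, and the quadratic objectives are illustrative assumptions.

```python
import numpy as np

def quantize(v, delta=0.01):
    # Uniform rounding quantizer with step delta.
    return delta * np.round(v / delta)

def quantized_consensus_admm(a, rho=1.0, steps=300, delta=0.01):
    # Complete graph over n agents; local objectives f_i(x) = (x - a[i])**2 / 2,
    # so the global minimizer is mean(a).
    a = np.asarray(a, float)
    n = len(a)
    deg = n - 1                       # neighbors per agent on a complete graph
    x = np.zeros(n)                   # primal variables
    p = np.zeros(n)                   # dual variables
    for _ in range(steps):
        q = quantize(x, delta)        # agents broadcast quantized states
        x_new = np.empty(n)
        for i in range(n):
            # Closed-form x-update for the quadratic toy objective:
            # argmin_x f_i(x) + p_i x + rho * sum_j (x - (x_i + Q(x_j))/2)**2
            s = sum(x[i] + q[j] for j in range(n) if j != i)
            x_new[i] = (a[i] - p[i] + rho * s) / (1.0 + 2.0 * rho * deg)
        qn = quantize(x_new, delta)
        for i in range(n):
            # Dual ascent on the consensus constraints, using quantized neighbors.
            p[i] += rho * sum(x_new[i] - qn[j] for j in range(n) if j != i)
        x = x_new
    return x

x = quantized_consensus_admm([1.0, 2.0, 6.0])
# All agents should agree near the global minimizer mean(a) = 3,
# up to an error on the order of the quantization step.
```

With a static quantizer the iterates settle in a neighborhood of the optimum whose size scales with the step `delta`, which is the regime the paper analyzes.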

Cited by 37 publications (27 citation statements)
References 28 publications
“…This rounding quantizer is an ε-compressor with ε = d·Δ²/4 [27]–[30]. In addition, if gradients are bounded, the sign compressor [4], the K-greedy quantizer [6], and the dynamic gradient quantizer [2], [6] are all ε-compressors.…”
Section: Definition 1 Only Requires Bounded Magnitude of the Compress… (mentioning)
confidence: 99%
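The quoted bound is easy to check numerically: rounding each of d coordinates to a grid of step Δ incurs at most Δ/2 error per coordinate, so the squared error never exceeds d·Δ²/4. The sketch below verifies this for random vectors (the symbol ε is reconstructed from context; the dimension and step values are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta = 8, 0.5
eps = d * delta**2 / 4                # claimed compression-error bound

worst = 0.0
for _ in range(10_000):
    x = rng.uniform(-10, 10, size=d)
    q = delta * np.round(x / delta)   # rounding quantizer, componentwise
    worst = max(worst, float(np.sum((q - x) ** 2)))

# Per-coordinate error is at most delta/2, so ||Q(x) - x||^2 <= d * delta**2 / 4.
```

The bound holds regardless of the magnitude of x, which is why the rounding quantizer is an ε-compressor without any bounded-gradient assumption, unlike the sign and dynamic gradient compressors mentioned above.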
“…Many existing algorithms for solving distributed optimization in multi-agent networks comprise two parts; see, e.g., [6]–[13] and the references therein. One part drives all agents to reach a consensus; the other pushes the consensus value toward an optimal solution of the optimization problem.…”
Section: Introduction (mentioning)
confidence: 99%
“…In this case, each agent can only access one bit of relative state information from each of its neighbors. Clearly, this is very different from the quantized settings in [10]–[13], which use a quantized version of the absolute state, and dynamic quantizers are essential for computing an exact optimal solution [10]. With static quantizers, each node can only find a sub-optimal solution [11]–[13].…”
Section: Introduction (mentioning)
confidence: 99%
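The static-versus-dynamic distinction in the statement above can be demonstrated on the simplest case, quantized average consensus: with a fixed quantization step the nodes freeze near (but not at) the true average, while a shrinking step drives the error toward zero. This is a generic sketch of that phenomenon, not any cited paper's algorithm; the network, step sizes, and schedules are illustrative assumptions.

```python
import numpy as np

def quantized_consensus(x0, steps, delta_of_k, alpha=0.3):
    # Quantized average consensus on a complete graph. Each step every node
    # applies x_i <- x_i + alpha * sum_j (Q_k(x_j) - Q_k(x_i)); the symmetric
    # update preserves the mean of the states exactly.
    x = np.array(x0, float)
    n = len(x)
    for k in range(steps):
        d = delta_of_k(k)
        q = d * np.round(x / d)               # uniform quantizer, step d
        x = x + alpha * (q.sum() - n * q)
    return x

x0 = [0.0, 1.0, 5.0]                          # true average is 2.0
x_static = quantized_consensus(x0, 120, lambda k: 0.5)            # fixed step
x_dynamic = quantized_consensus(x0, 120, lambda k: 0.5 * 0.9**k)  # shrinking step
err_static = float(np.max(np.abs(x_static - 2.0)))
err_dynamic = float(np.max(np.abs(x_dynamic - 2.0)))
```

In this run the static quantizer stalls once all states fall in one quantization cell, leaving a residual error on the order of the step, whereas the geometrically shrinking step recovers the exact average in the limit.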
“…However, they do not specifically consider distributed processing of graph signals with limited communication between nodes. There are also many works that focus on solving consensus problems in a network subject to quantized communication [11,12,13,14,15,16], but these address only average computation rather than more general processing tasks.…”
Section: Introduction (mentioning)
confidence: 99%