2021
DOI: 10.48550/arxiv.2106.08469
Preprint

Distributed Optimization over Time-varying Graphs with Imperfect Sharing of Information

Abstract: We study strongly convex distributed optimization problems where a set of agents is interested in collaboratively solving a separable optimization problem. In this paper, we propose and study a two time-scale decentralized gradient descent algorithm for a broad class of lossy information sharing over time-varying graphs. One time-scale fades out the (lossy) incoming information from neighboring agents, and one time-scale regulates the local loss functions' gradients. For strongly convex loss functions, wit…
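
The abstract describes a two time-scale update: one step size fades out the lossy incoming information from neighbors, the other scales the local gradients. Below is a minimal, hedged Python sketch of that idea; the quadratic losses, the additive-noise channel, the fixed mixing matrix, and the step-size exponents are all illustrative assumptions, not the paper's exact setting or proven choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 5, 3, 2000                       # agents, dimension, iterations (illustrative)
A = rng.normal(size=(n, d, d))
b = rng.normal(size=(n, d))

# Strongly convex local losses f_i(x) = 0.5*||A_i x - b_i||^2 + 0.5*||x||^2 (assumed, for illustration)
def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i]) + x

x = rng.normal(size=(n, d))                # local decision variables x_i(0)

for t in range(1, T + 1):
    beta = 1.0 / t ** 0.55                 # consensus step size: fades out incoming information slowly
    alpha = 1.0 / t ** 0.9                 # gradient step size: decays faster (assumed exponents)
    W = np.full((n, n), 1.0 / n)           # doubly stochastic mixing matrix (time-varying in general)
    x_hat = W @ x + 0.01 * rng.normal(size=(n, d))   # imperfect (noisy) weighted averages
    g = np.array([grad(i, x[i]) for i in range(n)])
    x = x + beta * (x_hat - x) - alpha * g # two time-scale decentralized gradient step

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

With decaying step sizes of this shape, the agents' states should cluster together while drifting toward a minimizer of the aggregate loss; the exponents above are only a plausible choice.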

Cited by 2 publications (8 citation statements). References 27 publications (62 reference statements).
“…In this work, the communication between the agents is assumed to be imperfect. We adapt the general framework of the noisy sharing of information introduced in [18] as described below. Given the states x_i(t) of agents i ∈ [n] at time t, we assume that each agent has access to an imperfect weighted average of its in-neighbours' states, denoted by x̂_i(t), given by x̂_i…”
Section: Problem Statement
Mentioning confidence: 99%
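
One way to read the imperfect weighted average x̂_i(t) quoted above is as the true weighted combination of in-neighbors' states corrupted by channel noise. The additive Gaussian model below is only an assumption; the framework of [18] that the citing paper adapts covers more general imperfections.

```python
import numpy as np

def imperfect_average(x, W, noise_std=0.01, rng=None):
    """Return x_hat_i(t) for every agent: a noisy weighted average of in-neighbors' states.

    x : (n, d) array whose rows are the states x_i(t)
    W : (n, n) weight matrix; W[i, j] > 0 only if j is an in-neighbor of i
    The additive Gaussian term stands in for the lossy channel (an assumption).
    """
    rng = rng or np.random.default_rng()
    return W @ x + noise_std * rng.normal(size=x.shape)
```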
“…Regarding the local cost function, we assume that agent i ∈ [n] has access to the gradient ∇f_i(x_i(t)) of its local cost function f_i(·) at each local decision variable x_i(t) at time t. Inspired by [18], we present the update rule in this work as…”
Section: Problem Statement
Mentioning confidence: 99%
“…In particular, this is used in the context of the distributed averaging/consensus problem [12, 23, 24], where each node has an initial numerical value and aims at evaluating the average of all initial values using exchange of quantized information, over a fixed or a time-varying network. In the context of distributed optimization, various compression approaches have been introduced to mitigate the communication overhead [25]–[29].…”
Section: Introduction
Mentioning confidence: 99%
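
The quoted passage mentions distributed averaging/consensus with quantized information exchange [12, 23, 24]. A minimal sketch of one such iteration follows; the uniform quantizer and the average-preserving update form are standard illustrative choices, not the specific schemes of those references.

```python
import numpy as np

def quantize(x, delta=0.1):
    """Uniform quantizer with resolution delta (one common choice)."""
    return delta * np.round(x / delta)

def quantized_consensus_step(x, W, delta=0.1):
    """One consensus step in which agents exchange only quantized states.

    x : (n, d) agent values, W : (n, n) doubly stochastic mixing matrix.
    Writing the update as x + W q - q (with q the quantized states) keeps the
    network-wide average exact despite quantization; this is an illustrative
    construction, not a specific paper's algorithm.
    """
    q = quantize(x, delta)
    return x + W @ q - q
```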