2022
DOI: 10.48550/arxiv.2204.10974
Preprint

DIMIX: DIminishing MIXing for Sloppy Agents

Abstract: We study non-convex distributed optimization problems in which a set of agents collaboratively solves a separable optimization problem distributed over a time-varying network. Existing methods for these problems rely on (at most) one-time-scale algorithms, where each agent performs a diminishing or constant step-size gradient descent step at the average estimate of the agents in the network. However, if possible at all, exchanging the exact information required to evaluate these average estimates…

Cited by 1 publication (1 citation statement)
References 41 publications (79 reference statements)
“…In this paper, we study distributed convex optimization problems over time-varying networks with imperfect information sharing. We consider the two-time-scale gradient descent method studied in [18,22] to solve the optimization problem. One time-scale adjusts the (imperfect) incoming information from the neighboring agents, and one time-scale controls the local cost functions' gradients.…”
Section: Introduction
confidence: 99%
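To make the two-time-scale idea concrete, the following is a minimal NumPy sketch of such an update, not the authors' exact DIMIX algorithm: the quadratic local costs, the fixed ring mixing matrix (the paper considers time-varying networks), the additive sharing noise, and the step-size exponents are all illustrative assumptions. One diminishing step size (alpha) damps the mixing of imperfect neighbor information, while a second (beta) scales the local gradient step.

```python
# Minimal sketch of a two-time-scale distributed gradient method with
# imperfect information sharing. Illustrative only: costs, network,
# noise model, and step-size schedules are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim, n_iters = 5, 3, 2000
targets = rng.normal(size=(n_agents, dim))  # f_i(x) = 0.5 * ||x - targets[i]||^2
x = rng.normal(size=(n_agents, dim))        # local estimates

# Doubly stochastic mixing matrix for a fixed ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

noise_std = 0.05  # imperfect sharing: neighbors receive corrupted values

for k in range(1, n_iters + 1):
    alpha = 1.0 / k**0.6  # consensus (mixing) step size: one time-scale
    beta = 1.0 / k**0.9   # gradient step size: the second time-scale
    noisy_x = x + noise_std * rng.normal(size=x.shape)  # what neighbors see
    mixed = W @ noisy_x          # weighted average of noisy neighbor estimates
    grads = x - targets          # gradient of each local quadratic cost
    # Two time-scales: alpha damps the imperfect consensus correction,
    # beta controls the local gradient descent step.
    x = x + alpha * (mixed - x) - beta * grads

print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

With diminishing alpha and beta, both the consensus spread and the distance to the minimizer of the aggregate cost (here, the mean of the targets) shrink over iterations despite the noisy exchange; the particular decay exponents above are only one admissible choice.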