2022
DOI: 10.48550/arxiv.2210.05897
Preprint

Almost Sure Convergence of Distributed Optimization with Imperfect Information Sharing

Abstract: In this work, we study convex distributed optimization problems where a set of agents are interested in solving a separable optimization problem collaboratively with noisy/lossy information sharing over time-varying networks. We study the almost sure convergence of a two-time-scale decentralized gradient descent algorithm to reach consensus on an optimizer of the objective loss function. One time scale fades out the imperfect incoming information from neighboring agents, and the second one adjusts the loca…
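
The abstract sketches a two-time-scale decentralized gradient method: one step size attenuates the noisy information received from neighbors, while the other scales the local gradient step. Below is a minimal sketch of one synchronous round under that reading, assuming an update of the form x_i(t+1) = x_i(t) + beta_t * sum_j W_ij (y_j(t) - x_i(t)) - alpha_t * grad f_i(x_i(t)), where y_j(t) is agent j's iterate received through a noisy channel; the mixing matrix W, the step-size schedules, and the additive-noise channel model are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def two_time_scale_step(x, grads, W, t, noise_std=0.1, alpha0=1.0, beta0=1.0):
    """One synchronous round of a (hypothetical) two-time-scale decentralized
    gradient update with noisy information sharing.

    x     : (n_agents, dim) array of current local iterates
    grads : list of callables, grads[i](x_i) -> local gradient of f_i
    W     : (n_agents, n_agents) doubly stochastic mixing matrix for this round
    t     : iteration counter (>= 1), used for the diminishing step sizes
    """
    # Assumed schedules: the gradient step alpha_t decays faster than the
    # consensus weight beta_t, so that alpha_t / beta_t -> 0.
    alpha_t = alpha0 / t          # e.g. O(1/t) gradient step size
    beta_t = beta0 / t ** 0.5     # e.g. O(1/sqrt(t)) consensus step size

    n, _ = x.shape
    # Each agent observes noisy copies of its neighbors' iterates
    # (additive Gaussian channel noise is an illustrative choice).
    y = x + noise_std * np.random.randn(*x.shape)

    x_new = np.empty_like(x)
    for i in range(n):
        # Consensus term, scaled by beta_t to fade out the channel noise.
        consensus = sum(W[i, j] * (y[j] - x[i]) for j in range(n) if W[i, j] > 0)
        # Local gradient correction on the faster time scale.
        x_new[i] = x[i] + beta_t * consensus - alpha_t * grads[i](x[i])
    return x_new

For instance, with quadratic local losses f_i(x) = 0.5 * ||x - a_i||^2 one would pass grads = [lambda z, a=a_i: z - a for a_i in targets] and a Metropolis-weight matrix W for the current communication graph; iterating the step above drives the agents toward consensus on a minimizer of the sum of the f_i under the assumed step-size schedules.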

Cited by 2 publications (4 citation statements)
References 18 publications (36 reference statements)
“…The above assumption is specific to the imperfect information sharing setup and is also used in [28], [29], [31], [32].…”
Section: B. Assumptions (mentioning)
Confidence: 99%
“…Previous studies have evaluated the effectiveness of two-time scale methods in DFL with noisy channels. However, these investigations were limited by inflexible assumptions, such as strong convexity, in papers such as [26]-[29]. These assumptions are rarely satisfied in practical and large-scale learning scenarios, which limits the applicability of the proposed methods.…”
Mentioning
Confidence: 99%
“…This assumption is specific to the imperfect information sharing setup and is considered recently in [26], [29], [30].…”
Section: Assumptions (mentioning)
Confidence: 99%
“…In this paper, our primary focus is on DFL in the presence of noise in communication channels. Recently, [27]-[30] study the performance of a two-time scale method [31] for DFL with channel noise while requiring the convexity of the objective function, uniformly bounded gradients, and access to the deterministic gradients; note that these three considerations are very restrictive assumptions, especially in emerging settings in large-scale learning.…”
Mentioning
Confidence: 99%