2019
DOI: 10.48550/arxiv.1910.08701
Preprint

Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks

Abstract: We study the distributed stochastic gradient (D-SG) method and its accelerated variant (D-ASG) for solving decentralized strongly convex stochastic optimization problems, where the objective function is distributed over several computational units lying on a fixed but arbitrary connected communication graph, subject to local communication constraints, and where only noisy estimates of the gradients are available. We develop a framework that allows choosing the stepsize and the momentum parameters of these algorithms in a …
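The D-SG and D-ASG iterations referred to in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the paper's implementation: the ring mixing matrix, the quadratic local objectives, the noise model, and the stepsize/momentum values are all illustrative assumptions, since the abstract is truncated before the parameter-selection framework is described.

# Minimal sketch (assumptions, not the paper's algorithm or tuning) of a
# decentralized stochastic gradient method (D-SG) and a momentum-accelerated
# variant (D-ASG) over a fixed ring communication graph.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                       # number of agents, dimension of the decision variable

# Doubly stochastic mixing matrix for a ring graph.
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

# Illustrative strongly convex local objectives f_i(x) = 0.5 * ||x - b_i||^2,
# so the network-wide optimum is the average of the b_i.
B = rng.normal(size=(n, d))

def noisy_grad(X, sigma=0.1):
    """Noisy estimates of the local gradients, one row per agent."""
    return (X - B) + sigma * rng.normal(size=X.shape)

def d_sg(steps=300, alpha=0.1):
    """D-SG sketch: average with neighbors, then take a local stochastic gradient step."""
    X = np.zeros((n, d))
    for _ in range(steps):
        X = W @ X - alpha * noisy_grad(X)
    return X

def d_asg(steps=300, alpha=0.1, beta=0.5):
    """D-ASG sketch: add a Nesterov/heavy-ball style momentum extrapolation."""
    X = np.zeros((n, d))
    X_prev = X.copy()
    for _ in range(steps):
        Y = X + beta * (X - X_prev)   # momentum extrapolation
        X_prev = X
        X = W @ Y - alpha * noisy_grad(Y)
    return X

x_star = B.mean(axis=0)               # optimum of the illustrative problem
print("D-SG  distance to optimum:", np.linalg.norm(d_sg() - x_star))
print("D-ASG distance to optimum:", np.linalg.norm(d_asg() - x_star))

In this toy setting, the parameter-selection question the abstract raises corresponds to tuning alpha and beta: larger values speed up convergence but amplify both the gradient noise and the consensus error across agents.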

Cited by 12 publications (36 citation statements)
References 57 publications
“…and τ is chosen as a constant in (0, 1), such as τ = 1/2. In light of the magnitudes mentioned in (12), inequalities (13) can be satisfied easily by choosing the parameters properly.…”
Section: Bounding the Consensus Errors (mentioning)
confidence: 99%
“…Since Φ_C^k is the weighted sum of consensus errors, inequality (14) also indicates that the weighted sum of consensus errors is a "Q-linear" sequence with "additional errors" in terms of c_2(γ, τ, p, q, η)‖∇F(X^k) − ∇F(Q^k)‖_F^2. By the magnitudes of the parameters and stepsize mentioned in (12), we have…”
Section: Lemma 5: If the Parameters and Stepsize Satisfy … (mentioning)
confidence: 99%
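For context, a "Q-linear sequence with additional errors" of the kind described in this excerpt typically refers to a recursion of the shape

    Φ_C^{k+1} ≤ ρ Φ_C^k + c_2(γ, τ, p, q, η) ‖∇F(X^k) − ∇F(Q^k)‖_F^2,   with 0 < ρ < 1,

which, when unrolled, bounds Φ_C^k by ρ^k Φ_C^0 plus a geometrically weighted sum of the gradient-difference terms. The exact form of inequality (14) and the contraction factor ρ are not reproduced in the excerpt, so this display is only an assumed shape.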
“…Distributed optimization problems have many applications in large-scale machine learning [1], wireless networks [2], and parameter estimation [3], to name a few. Over the last decades, numerous algorithms for the distributed optimization problem have been developed, such as (sub)gradient methods [4][5][6][7][8][9][10], dual averaging methods [11,12], primal-dual methods [13,14], gradient push methods [15][16][17], and gradient tracking methods [18][19][20]. We refer to the survey [21] for recent developments in distributed optimization.…”
Section: Introduction (mentioning)
confidence: 99%