2017
DOI: 10.1109/tac.2017.2677879

Convergence Rate of Distributed ADMM Over Networks

Abstract: We propose a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM) to minimize the sum of locally known convex functions using communication over a network. This optimization problem arises in many applications in distributed machine learning and statistical estimation. We show that when the functions are convex, both the objective function values and the feasibility violation converge at rate O(1/T), where T is the number of iterations. We then show that if the functions …
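
As a concrete, purely illustrative companion to the abstract, the sketch below runs a generic consensus-ADMM loop over a small network, with one auxiliary variable per edge enforcing agreement between neighboring nodes. The quadratic local costs f_i(x) = ½‖x − a_i‖², the ring graph, and the penalty ρ = 1 are assumptions made for the example; this is not the paper's specific algorithm or its exact update rules.

```python
# Illustrative consensus ADMM over a network (not the paper's exact scheme).
# Each node i holds a local quadratic cost f_i(x) = 0.5 * ||x - a_i||^2.
# One auxiliary variable z_e per edge e = {i, j} enforces x_i = z_e = x_j.
import numpy as np

rng = np.random.default_rng(0)
n, d, rho, T = 6, 3, 1.0, 200                  # nodes, dimension, penalty, iterations
a = rng.normal(size=(n, d))                    # local data defining each f_i
edges = [(i, (i + 1) % n) for i in range(n)]   # ring communication graph (assumption)
node_edges = {i: [e for e, (p, q) in enumerate(edges) if i in (p, q)] for i in range(n)}
deg = {i: len(node_edges[i]) for i in range(n)}

x = np.zeros((n, d))                           # primal variable held by each node
z = np.zeros((len(edges), d))                  # one consensus variable per edge
u = np.zeros((len(edges), 2, d))               # scaled duals, one per (edge, endpoint)

for _ in range(T):
    # x-update: each node solves its local subproblem (closed form for quadratic f_i);
    # it uses only z and u on incident edges, so all nodes can update in parallel.
    for i in range(n):
        acc = np.zeros(d)
        for e in node_edges[i]:
            side = 0 if edges[e][0] == i else 1
            acc += z[e] - u[e, side]
        x[i] = (a[i] + rho * acc) / (1.0 + rho * deg[i])
    # z-update: per-edge average of the two endpoint estimates plus their duals.
    for e, (i, j) in enumerate(edges):
        z[e] = 0.5 * ((x[i] + u[e, 0]) + (x[j] + u[e, 1]))
    # dual update: accumulate the remaining disagreement on each edge.
    for e, (i, j) in enumerate(edges):
        u[e, 0] += x[i] - z[e]
        u[e, 1] += x[j] - z[e]

print("consensus gap :", float(np.max(np.abs(x - x.mean(axis=0)))))
print("error vs. mean(a_i):", float(np.linalg.norm(x[0] - a.mean(axis=0))))
```

For convex local costs, standard two-block ADMM analysis gives O(1/T) ergodic guarantees on the objective error and on the consensus (feasibility) violation, which is the regime the abstract describes.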

Cited by 167 publications (147 citation statements)
References 69 publications

“…Proof: Since our C-ADMM is a direct application of ADMM to the dual problem, we can similarly obtain that as $t \to \infty$, $\lambda_i(t) \to \lambda^*$ and $\lambda_i(t) \to \lambda_j(t)$ for any $i \in \mathcal{V}$, $j \in \mathcal{N}(i)$. It therefore remains to show that any limit point of $(x_1(t+1), \ldots, x_N(t+1))$ is asymptotically optimal, i.e., that as $t \to \infty$,
$$\nabla C_i\big(x_i(t+1)\big) + \lambda^* \to 0, \qquad \sum_{i=1}^{N} x_i(t+1) - \sum_{i=1}^{N} B_i \to 0.$$
To show this, consider the optimality condition of (15a), i.e.,
$$0 = \nabla C_i\big(x_i(t+1)\big) + \frac{1}{M_i}\Big[\beta\big(x_i(t+1) - B_i\big) - \sum_{j \in \mathcal{N}(i)} p_j(t)\, A_{ji} + \beta \sum_{j \in \mathcal{N}(i)} A_{ji}^2\, \lambda_i(t) - \beta \sum_{j \in \mathcal{N}(i)} A_{ji}\, y_j(t)\Big] = \nabla C_i\big(x_i(t+1)\big) + \lambda_i \ldots$$ …”
Section: Distributed DC-ADMM (mentioning)
confidence: 89%
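
For orientation, the quoted condition has the shape of the generic stationarity condition of an ADMM primal update. The sketch below states that generic condition for a separable problem $\min \sum_i C_i(x_i)$ subject to $\sum_i A_i x_i = b$, assuming differentiable $C_i$; the symbols $A_i$, $b$, $p(t)$, and $\beta$ are placeholders for this illustration, not the cited paper's node-specific quantities (which also carry the weights $M_i$ and per-neighbor duals).

```latex
% Minimal sketch: stationarity of the ADMM x_i-update for
%   minimize  sum_i C_i(x_i)   subject to   sum_i A_i x_i = b,
% with multiplier p(t), penalty beta, and the other blocks held at x_j(t).
\[
0 \;=\; \nabla C_i\!\big(x_i(t+1)\big)
   \;+\; A_i^{\top} p(t)
   \;+\; \beta\, A_i^{\top}\!\Big( A_i\, x_i(t+1) \;+\; \sum_{j \neq i} A_j\, x_j(t) \;-\; b \Big).
\]
```

Setting the gradient of the augmented Lagrangian with respect to $x_i$ to zero, with the remaining blocks fixed at their current iterates, yields exactly this form.
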
“…We will use an implementation of the ADMM algorithm, which separates each constraint associated with a node into multiple constraints, each involving only the variable associated with one of the neighboring nodes. We use a reformulation technique introduced in the work of Bertsekas and Tsitsiklis to separate the optimization variables in a constraint, allowing them to be updated simultaneously.…”
Section: Network Model and Problem Formulation (mentioning)
confidence: 99%
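
The variable-splitting step mentioned in this excerpt can be sketched generically as follows; the notation ($f_i$, $x_i$, $z_{ij}$, graph $G = (V, E)$) is illustrative and not taken from the cited works.

```latex
% Sketch of per-edge variable splitting for a consensus problem over a graph
% G = (V, E): the coupled constraints x_i = x_j are replaced by per-edge
% copies, so each constraint involves one node variable plus one auxiliary.
\[
\min_{\{x_i\},\, \{z_{ij}\}} \; \sum_{i \in V} f_i(x_i)
\quad \text{s.t.} \quad
x_i = z_{ij}, \;\; x_j = z_{ij} \qquad \forall\, \{i,j\} \in E .
\]
% Because every constraint now touches only one x_i (and one z_{ij}), all
% x-variables can be updated simultaneously in the ADMM x-step, and all
% z-variables simultaneously in the z-step.
```

This is the same splitting used in the illustrative consensus-ADMM sketch that follows the abstract above.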