2016
DOI: 10.1016/j.sysconle.2016.07.009

Primal–dual algorithm for distributed constrained optimization

Abstract: The paper studies a distributed constrained optimization problem in which multiple agents connected over a network collectively minimize the sum of their individual objective functions, subject to a global constraint that is the intersection of the local constraint sets assigned to the agents. Based on the augmented Lagrangian method, a distributed primal-dual algorithm with a projection operation is proposed to solve the problem. It is shown that, with an appropriately chosen constant step size, the local estimates deri…
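The abstract is truncated before the algorithm's update rules are stated, so the following is only a minimal sketch of a generic distributed projected primal-dual iteration of the kind the abstract describes. The mixing matrix `W`, the dual ascent on the disagreement, and all names are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def distributed_primal_dual(grads, projections, W, x0, alpha, num_iters):
    """Generic sketch: consensus mixing, a dual ascent step on the
    disagreement, then a projected primal step per agent. Not the
    paper's exact update; alpha is assumed constant and small, as
    the abstract indicates."""
    n = len(grads)
    x = np.tile(x0, (n, 1))        # one primal estimate per agent
    lam = np.zeros_like(x)         # dual variables (consensus multipliers)
    for _ in range(num_iters):
        x_mix = W @ x                          # average neighbours' estimates
        lam = lam + alpha * (x - x_mix)        # dual ascent on disagreement
        for i in range(n):
            g = grads[i](x_mix[i]) + lam[i]    # local gradient plus multiplier
            x[i] = projections[i](x_mix[i] - alpha * g)   # projected primal step
    return x.mean(axis=0)
```

For instance, with quadratic local objectives and box constraints, each `projections[i]` can simply be a clip onto the local box.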

Cited by 119 publications (96 citation statements)
References 24 publications (62 reference statements)
“…This can be seen from the following consideration: i) If $\sigma_{j,k} \le \sigma_{i,k}$ $\forall j \in N_i(k)$, then from (6) we derive $\bar{\sigma}_{i,k} = \sigma_{i,k}$. Since $\sigma_{i,k+1} > \sigma_{i,k}$, by (9) it follows that $\|\hat{x}_{i,k+1}\| > M_{\bar{\sigma}_{i,k}}$, and hence from (8) we derive $x_{i,k+1} = x^*$. ii) If there exists $j \in N_i(k)$ such that $\sigma_{j,k} > \sigma_{i,k}$, then from (6) we derive $\bar{\sigma}_{i,k} = \max_{j \in N_i(k)} \sigma_{j,k} > \sigma_{i,k}$, and from (7) we have $x_{i,k+1} = x^*$.…”
Section: A. DSAAWET (mentioning)
confidence: 88%
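The case analysis in this excerpt describes the expanding-truncation step of DSAAWET (a distributed stochastic approximation algorithm with expanding truncations): each agent first takes a max-consensus of the truncation counters, then keeps its candidate iterate only if it stays inside the current truncation bound. A sketch of that decision logic follows; all names (`truncation_step`, `M`, `x_star`) are illustrative, and (6)-(9) refer to the cited paper's update rules.

```python
import numpy as np

def truncation_step(i, x_hat, sigma, neighbors, M, x_star):
    """sigma maps agent index -> truncation counter sigma_{i,k};
    M is an increasing sequence of truncation bounds; x_star is the
    fixed reset point. Interface is an assumption for illustration."""
    sigma_bar = max(sigma[j] for j in list(neighbors) + [i])  # max-consensus, rule (6)
    if sigma_bar > sigma[i]:
        # case ii): some neighbour has truncated more often -> reset, rule (7)
        return x_star, sigma_bar
    if np.linalg.norm(x_hat) > M[sigma_bar]:
        # case i): candidate escapes the current bound -> reset and
        # increment the counter, rules (8)-(9)
        return x_star, sigma_bar + 1
    return x_hat, sigma_bar  # keep the candidate iterate, rule (8)
```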
“…By (48) and (49), we obtain $\sigma_{i,k+1} = \bar{\sigma}_{i,k} = \sigma$ $\forall k \ge k_0$, $\forall i \in V$. Thus, for any $k \ge k_0$ and any $i \in V$, we have that $\|\hat{x}_{i,k+1}\| \le M_{\sigma}$ by (9), and $x_{i,k+1} = \hat{x}_{i,k+1}$ by (8). Then for any $i \in V$, $\{x_{i,k}\}$ is bounded and (14) follows from (50).…”
Section: B. Local Properties Along Convergent Subsequences (mentioning)
confidence: 90%
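Spelled out, the boundedness argument compressed in this excerpt is the following chain (under the reconstructed notation above, which is an assumption):

```latex
\sigma_{i,k+1} = \bar{\sigma}_{i,k} = \sigma \ \ \forall k \ge k_0
\;\overset{(9)}{\Longrightarrow}\;
\|\hat{x}_{i,k+1}\| \le M_{\sigma}
\;\overset{(8)}{\Longrightarrow}\;
x_{i,k+1} = \hat{x}_{i,k+1},
\quad\text{so}\quad
\sup_{k \ge k_0} \|x_{i,k+1}\| \le M_{\sigma}.
```

That is, once the truncation counters stop changing, every later iterate coincides with its candidate and inherits the uniform bound $M_{\sigma}$.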
“…Article [1] studied nonuniform convex constraints, but the communication graph is complete and all the edge weights are assumed to be equal. Building on [1], articles [24], [25] gave some results on nonuniform convex constraints, but the communication graph is fixed and connected, and the objective functions are assumed to be strongly convex, or some intermediate variables need to be transmitted in addition to the agent states. Article [29] studied a distributed optimization problem with nonuniform convex constraints and gave conditions guaranteeing optimal convergence of the team objective function, but the subgradients and the convex constraint sets are both assumed bounded.…”
Section: Introduction (mentioning)
confidence: 99%
“…This latter feature can speed up practical convergence. • We show by means of a counterexample the necessity of developing new algorithmic machinery to capture the case of different constraint sets per agent, as a direct adaptation of the algorithm in [8] may fail to converge if agents are subject to different constraint sets. • We show that the iterates generated by our proposed algorithm converge to some minimizer of the centralized problem counterpart for step sizes of the form $c(k) = \frac{\eta}{k+1}$, $\eta > 0$, while we also establish a convergence rate of $O\!\left(\frac{\log k}{\sqrt{k}}\right)$ for convergence in value with step sizes of the form $c(k) = \frac{\eta}{\sqrt{k+1}}$.…”
Section: Introduction (mentioning)
confidence: 99%
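The $O(\log k/\sqrt{k})$ rate quoted for $c(k) = \eta/\sqrt{k+1}$ is consistent with the standard diminishing-step-size calculation. As a hedged sketch of where the logarithmic factor comes from (a generic subgradient-style bound, not necessarily the cited paper's proof, with $R$ the initial distance to a minimizer and $G$ a bound on the subgradients):

```latex
f(\hat{x}_K) - f^{\star}
\;\le\;
\frac{R^2 + G^2 \sum_{k=0}^{K} c(k)^2}{2 \sum_{k=0}^{K} c(k)},
\qquad
\sum_{k=0}^{K} c(k)^2 = \eta^2 \sum_{k=0}^{K} \tfrac{1}{k+1} = O(\log K),
\qquad
\sum_{k=0}^{K} c(k) = \Theta(\sqrt{K}).
```

Substituting the two sums into the ratio gives the $O(\log K/\sqrt{K})$ rate in value; the harmonic schedule $c(k) = \eta/(k+1)$ instead trades the rate guarantee for convergence of the iterates themselves.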