2016
DOI: 10.11648/j.acm.20160505.15
Distributed Subgradient Algorithm for Multi-Agent Convex Optimization with Global Inequality and Equality Constraints

Abstract: In this paper, we present an improved subgradient algorithm for solving a general multi-agent convex optimization problem in a distributed way, where the agents are to jointly minimize a global objective function subject to a global inequality constraint, a global equality constraint and a global constraint set. The global objective function is a combination of local agent objective functions and the global constraint set is the intersection of each agent local constraint set. Our motivation comes fro…
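The setup the abstract describes (agents jointly minimizing a sum of local objectives over the intersection of local constraint sets) can be illustrated with a minimal consensus-plus-projection sketch. This is an illustrative sketch of a generic distributed projected subgradient iteration, not the paper's algorithm: it omits the paper's treatment of the global inequality and equality constraints, and the mixing matrix, step-size rule, and toy problem below are all assumptions.

```python
import numpy as np

def distributed_subgradient(subgrads, projections, W, x0, steps=500):
    """Each agent i holds a local objective f_i and a local set X_i; the
    network jointly minimizes sum_i f_i(x) over X = intersection of the X_i.
    W is a doubly stochastic mixing matrix, x0 an (N, d) array of starts."""
    x = np.array(x0, dtype=float)
    for t in range(1, steps + 1):
        alpha = 1.0 / np.sqrt(t)               # diminishing step size (assumed)
        mixed = W @ x                          # consensus averaging with neighbors
        for i in range(x.shape[0]):
            g = subgrads[i](mixed[i])          # subgradient of local f_i
            x[i] = projections[i](mixed[i] - alpha * g)  # project onto X_i
    return x.mean(axis=0)

# Toy instance: f1(x) = (x-1)^2, f2(x) = (x+1)^2, X1 = X2 = [-2, 2];
# the minimizer of f1 + f2 over [-2, 2] is x = 0.
subgrads = [lambda x: 2 * (x - 1), lambda x: 2 * (x + 1)]
proj = lambda x: np.clip(x, -2.0, 2.0)
W = np.full((2, 2), 0.5)                       # complete-graph averaging
x_hat = distributed_subgradient(subgrads, [proj, proj], W,
                                np.array([[2.0], [-2.0]]))
print(x_hat)  # close to the global minimizer 0
```

The consensus step and the local projection are what make the method distributed: each agent only averages with neighbors (via W) and projects onto its own set, yet the network-wide average converges toward a minimizer in the intersection.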

Cited by 3 publications (2 citation statements); references 17 publications.
“…In terms of the theoretical analysis, whether the set constraints are the same or not does not have a significant influence, as long as the optimal solution set still lies in the intersection of these set constraints (i.e., x ∈ X = ∩_{i=1}^{N} X_i). Hence, similar to the work in [34,35,42…”
Section: Problem Formulation (supporting)
confidence: 74%
“…and the distributed penalty primal-dual subgradient method in [41-43]. The work in [44] proposed an accelerated distributed Nesterov gradient descent for convex and smooth functions, while Yuan et al. in [45] developed an optimal distributed stochastic mirror descent for strongly convex functions.…”
Section: Literature Review (mentioning)
confidence: 99%