2014
DOI: 10.1109/tac.2014.2308612

Distributed Constrained Optimization by Consensus-Based Primal-Dual Perturbation Method

Abstract: Various distributed optimization methods have been developed for solving problems which have simple local constraint sets and whose objective function is the sum of local cost functions of distributed agents in a network. Motivated by emerging applications in smart grid and distributed sparse regression, this paper studies distributed optimization methods for solving general problems which have a coupled global cost function and have inequality constraints. We consider a network scenario where each agent has n…

Cited by 358 publications (305 citation statements)
References 41 publications (97 reference statements)
“…In [27], energy storage is optimally controlled through ICC. In [28], a primal-dual perturbed sub-gradient method is applied locally, while averaging consensus is used to estimate the global cost functions and constraints.…”
Section: Introduction
confidence: 99%
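
The averaging-consensus step mentioned in this snippet can be illustrated with a small sketch. The Python example below (a hypothetical 4-agent ring with Metropolis-style weights, both assumed rather than taken from the paper) shows how repeated mixing with a doubly stochastic matrix drives each agent's local estimate of a global quantity toward the network-wide average; it is a minimal sketch, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: weighted-averaging consensus over a network with a
# doubly stochastic mixing matrix W. Each agent i holds a local estimate
# x_i (e.g., of a global cost or constraint value); repeated mixing with
# neighbors drives all estimates toward the network-wide average.
def consensus_average(X, W, num_rounds=50):
    """X: (n_agents, d) array of local estimates; W: (n_agents, n_agents)
    doubly stochastic matrix matching the communication graph."""
    for _ in range(num_rounds):
        X = W @ X  # each agent averages its value with its neighbors' values
    return X

# Toy example on a 4-agent ring (assumed topology and weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
local_values = np.array([[1.0], [3.0], [5.0], [7.0]])
print(consensus_average(local_values, W))  # all rows approach the average, 4.0
```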
“…A notable recent exception is [10], where the global cost function is not separable. In [10], it is assumed that each agent knows the global cost function, but only has access to its local decision variables and local constraint set.…”
Section: Introduction
confidence: 99%
“…In [10], it is assumed that each agent knows the global cost function, but only has access to its local decision variables and local constraint set. Furthermore, [10] assumes a global coupled inequality constraint, where each agent knows its (functional) contribution to the global coupled constraint. In this setting, [10] presents a distributed optimization algorithm based on neighbor-to-neighbor communication.…”
Section: Introduction
confidence: 99%
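
As a rough illustration of the primal-dual perturbation idea referenced in these snippets, the sketch below applies perturbed primal-dual subgradient updates to a toy coupled problem (a quadratic global cost, one coupled inequality, and local box constraints). The problem data and step sizes are assumed for illustration, and the consensus step that would let each agent estimate the global cost and constraint from neighbor-to-neighbor communication is omitted for brevity; this is not the algorithm of [10] reproduced verbatim.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm) of a primal-dual
# perturbation update for: min_x F(x) s.t. g(x) <= 0, x in [0,1]^n,
# with coupled cost F(x) = ||x - c||^2 and coupled constraint
# g(x) = sum(x) - b. Agent i would update only coordinate x[i]; here
# F and g are evaluated directly instead of via consensus estimates.

c = np.array([0.9, 0.8, 0.7, 0.6])   # hypothetical problem data
b = 2.0
n = len(c)

grad_F = lambda x: 2.0 * (x - c)      # gradient of the coupled cost
g      = lambda x: np.sum(x) - b      # coupled inequality constraint
grad_g = np.ones(n)                   # its gradient w.r.t. x

proj_X   = lambda x: np.clip(x, 0.0, 1.0)   # local box constraints
proj_pos = lambda lam: max(lam, 0.0)        # dual feasibility (lambda >= 0)

x, lam = np.zeros(n), 0.0
beta, step = 0.5, 0.05                # assumed perturbation and step sizes
for k in range(500):
    # Perturbation points: one gradient step on the Lagrangian in each variable.
    x_pert   = proj_X(x - beta * (grad_F(x) + lam * grad_g))
    lam_pert = proj_pos(lam + beta * g(x))
    # Primal and dual subgradient updates evaluated at the perturbed points.
    x   = proj_X(x - step * (grad_F(x) + lam_pert * grad_g))
    lam = proj_pos(lam + step * g(x_pert))

print(np.round(x, 3), round(lam, 3))  # sum(x) should approach b from below
```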