IEEE Conference on Decision and Control and European Control Conference 2011
DOI: 10.1109/cdc.2011.6160605
Newton-Raphson consensus for distributed convex optimization

Abstract: We study the problem of unconstrained distributed optimization in the context of multi-agent systems subject to limited communication connectivity. In particular, we focus on the minimization of a sum of convex cost functions, where each component of the global function is available only to a specific agent and can thus be seen as a private local cost. The agents need to cooperate to compute the minimizer of the sum of all costs. We propose a consensus-like strategy to estimate a Newton-Raphson descen…
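The truncated abstract outlines the core mechanism: agents use consensus averaging to assemble the ingredients of a Newton-Raphson step from purely local costs. A minimal sketch of that averaging identity, assuming scalar quadratic local costs and a ring communication graph (the costs, weights, and topology are illustrative choices, not the paper's exact two-time-scale algorithm):

```python
import numpy as np

# Each agent i holds a private quadratic cost f_i(x) = 0.5 * a_i * (x - b_i)**2,
# so f_i'(x) = a_i*(x - b_i), f_i''(x) = a_i, and the minimizer of the sum
# is x* = sum(a*b) / sum(a).  (Illustrative setup.)
n = 6
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 3.0, size=n)
b = rng.uniform(-2.0, 2.0, size=n)
x_star = (a * b).sum() / a.sum()

# Doubly stochastic averaging matrix on a ring: each agent mixes with its
# two neighbours only (limited communication connectivity).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

# Consensus on the Newton ingredients g_i = f_i''(x)*x - f_i'(x) (= a_i*b_i
# for quadratics) and h_i = f_i'' (= a_i).  Averaging with a doubly
# stochastic W preserves the ratio sum(g)/sum(h).
y = a * b
z = a.copy()
for _ in range(300):
    y = W @ y
    z = W @ z

x = y / z  # every agent's local ratio approaches the global Newton step x*
print(np.allclose(x, x_star, atol=1e-6))  # prints True
```

For quadratic costs this single outer step is exact in the consensus limit; for general convex costs the paper interleaves the averaging with re-linearization, which this sketch omits.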

Cited by 82 publications (95 citation statements)
References 37 publications
“…In this subsection we characterize the best convergence speed of the Lagrangian method for problem (5). We then compare such a bound to the Fast-Lipschitz method in the following subsection.…”
Section: B. Lagrangian Methods
confidence: 99%
“…Since these approaches rely on subgradients or Lagrangian methods, they usually have slow rates of convergence. Recent works have attempted to speed up convergence by considering higher-order methods [5], [6]. However, updating the dual variables requires an iterative broadcast-and-collect message exchange involving all nodes of the network.…”
Section: Introduction
confidence: 99%
“…convex functions, each representing a private local cost available only to a single agent, subject to some convex constraints. Some of the recent literature on distributed optimization algorithm design includes distributed algorithms implemented both in discrete-time [5][6][7][8][9] and continuous-time [10][11][12][13][14]. Although some of these algorithms can solve the optimal resource allocation problem (1), they require each agent to keep and evolve a copy of the global decision variable of the problem, which is of order N, where N is the size of the network.…”
confidence: 99%
“…[17]). Singularly perturbed distributed algorithms are used in [12] for unconstrained in-network convex optimization, and in [18] for the dynamic consensus problem over networked systems.…”
confidence: 99%
“…Dual decomposition and subgradient methods can then be applied; recent works are Nedic and Ozdaglar (2009); Rantzer (2009); Boyd (2010); Sundhar Ram et al. (2010). As these methods typically have slow rates of convergence, recent papers explore higher-order methods, see Zanella et al. (2011); Wei et al. (2011). Other methods are based on consensus iterations, where nodes update their decision variables as a weighted sum of other decision variables in the neighborhood, see, e.g., Notarstefano and Bullo (2009); Olshevsky and Tsitsiklis (2009); Nedic et al. (2010); Chiuso et al. (2011).…”
Section: Introduction
confidence: 99%
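The weighted-sum update described in the last excerpt is plain average consensus. A self-contained illustration, assuming a 4-node path graph with Metropolis-style doubly stochastic weights (the graph, weights, and initial values are my choices for illustration, not taken from any cited paper):

```python
import numpy as np

# Average consensus: each node repeatedly replaces its value with a weighted
# sum of its own and its neighbours' values.  With a doubly stochastic weight
# matrix W, all values converge to the network-wide average.
W = np.array([
    [0.50, 0.50, 0.00, 0.00],   # node 0 talks to node 1
    [0.50, 0.25, 0.25, 0.00],   # node 1 talks to nodes 0 and 2
    [0.00, 0.25, 0.25, 0.50],   # node 2 talks to nodes 1 and 3
    [0.00, 0.00, 0.50, 0.50],   # node 3 talks to node 2
])
x = np.array([1.0, 2.0, 3.0, 4.0])  # initial local decision variables
avg = x.mean()

for _ in range(500):
    x = W @ x                       # weighted sum over the neighborhood

print(np.allclose(x, avg))  # prints True: all nodes agree on the average
```

Double stochasticity (rows and columns both sum to 1) is what makes the limit the exact average rather than some other weighted combination; the higher-order methods discussed above build descent steps on top of exactly this primitive.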