2013
DOI: 10.1109/tac.2012.2209984
Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization

Abstract: We introduce a new framework for the convergence analysis of a class of distributed, constrained, non-convex optimization algorithms in multi-agent systems. The aim is to search for local minimizers of a non-convex objective function assumed to be a sum of the agents' local utility functions. The algorithm under study consists of two steps: a local stochastic gradient descent at each agent and a gossip step that drives the network of agents to a consensus. Under the assumption of decreasing stepsize,…
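The two-step scheme described in the abstract can be sketched in code. The following is a minimal illustrative simulation, not the authors' exact algorithm: the local utilities, constraint set, gossip matrix, and stepsize schedule are all assumptions chosen to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T = 5, 2, 2000            # agents, dimension, iterations

def grad_i(i, x):
    """Noisy gradient of an illustrative local utility f_i(x) = ||x - c_i||^2 / 2."""
    c = np.array([np.cos(i), np.sin(i)])   # hypothetical local minimizer of agent i
    return (x - c) + 0.1 * rng.standard_normal(d)

def project(x, radius=10.0):
    """Euclidean projection onto the constraint set {||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

# Doubly stochastic gossip matrix on a ring: each agent averages with its neighbors.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

x = rng.standard_normal((N, d))          # one iterate per agent
for n in range(1, T + 1):
    gamma = 1.0 / n                      # decreasing stepsize, as the paper assumes
    # Step 1: local projected stochastic gradient descent at each agent.
    x = np.array([project(x[i] - gamma * grad_i(i, x[i])) for i in range(N)])
    # Step 2: gossip step driving the network of agents toward consensus.
    x = W @ x

# After many iterations the agents nearly agree on a common point.
spread = np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))
```

With these quadratic local utilities, the common limit should sit near the minimizer of the average objective, i.e. the mean of the points `c_i`; with a genuinely non-convex sum, the analysis in the paper only guarantees convergence to the set of local minimizers.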


Cited by 223 publications (237 citation statements)
References 40 publications
“…First, using earlier results on distributed stochastic approximation due to [22], we show that the sequence obtained from (19) is related to a mean ordinary differential equation (ODE). Then, we show that this ODE is similar to the one analyzed by [2] for the single-agent CE algorithm.…”
Section: Convergence Analysis (mentioning)
confidence: 99%
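The ODE argument invoked in this excerpt follows the classical stochastic-approximation recipe. As a hedged background sketch (standard Kushner–Clark material, not the cited proof itself): with stepsizes $\gamma_n \to 0$ and $\sum_n \gamma_n = \infty$, the projected iterates of a scheme like the present paper's track the mean projected ODE

```latex
\dot{x}(t) = -\nabla f\bigl(x(t)\bigr) + z(t), \qquad z(t) \in -N_X\bigl(x(t)\bigr),
```

where $f$ is the average of the agents' utilities, $X$ the constraint set, and $N_X(x)$ its normal cone at $x$; limit points of the algorithm then lie among the stationary (KKT) points of $\min_{x \in X} f(x)$.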
“…Nevertheless, there are far fewer studies on non-convex optimization. Although some useful theoretical works [22,23] analyze the convergence of diffusion and consensus first-order methods over non-convex cost functions (indeed, we use results from [22] in this paper), first-order methods are not effective for optimizing black-box multidimensional multi-extrema objectives, where x is a vector of length M and X ⊂ ℝ^M is a non-empty compact set of solutions.…”
Section: Introduction (mentioning)
confidence: 99%
“…Most existing algorithms are based on discrete-time dynamics (see, e.g., [5]–[10], [21]). By designing consensus-based dynamics, these discrete-time algorithms can find the solution of the optimization problem.…”
Section: Introduction (mentioning)
confidence: 99%
“…Convergence to the same optimal solution is proved for the cases when the weights are constant and equal, and when the weights are uniform but all agents share the same constraint set. Distributed algorithms for set-constrained optimization were further investigated in Bianchi and Jakubowicz [34] and Lou et al. [35]. To handle distributed optimization problems with asynchronous step-sizes or inequality-equality constraints, distributed Lagrangian and penalty primal-dual subgradient algorithms were developed in Zhu and Martinez [36] and Towfic and Sayed [37].…”
Section: Introduction (mentioning)
confidence: 99%