2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2019.8919690
Distributed Zero-Order Algorithms for Nonconvex Multi-Agent Optimization

Abstract: Distributed multi-agent optimization is at the core of many applications in distributed learning, control, and estimation. Most existing algorithms assume knowledge of first-order information of the objective and have been analyzed for convex problems. However, there are situations where the objective is nonconvex and one can only evaluate the function values at finitely many points. In this paper, we consider derivative-free distributed algorithms for nonconvex multi-agent optimization, based on recent progress…
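As context for the abstract's derivative-free setting, below is a minimal sketch of a standard two-point random-direction gradient estimator that uses only function evaluations. It illustrates the general zero-order estimation idea discussed in this literature, not the paper's specific distributed algorithm; the name zo_gradient_estimate and its parameters are illustrative assumptions.

import numpy as np

def zo_gradient_estimate(f, x, mu=1e-4, num_dirs=20, rng=None):
    # Two-point zero-order gradient estimate of f at x: averages
    # d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random unit
    # directions u, so only function evaluations are required.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return d * g / num_dirs  # approximates the gradient of a smoothed version of f

if __name__ == "__main__":
    # Simple nonconvex test function (illustrative only).
    f = lambda z: float(np.sum(z ** 2) + 0.1 * np.sin(10.0 * z[0]))
    x = np.array([1.0, -2.0])
    print(zo_gradient_estimate(f, x))

Shrinking the smoothing radius mu reduces the bias of the estimate, while averaging over more random directions reduces its variance, at the cost of additional function evaluations.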

Cited by 32 publications (31 citation statements)
References 40 publications
“…Gradient-free optimization methods have a long history [19] and have an evident advantage, since computing a function value is much simpler than computing its gradient. Gradient-free optimization methods have gained renewed interest in recent years, e.g., [20]-[23]. Essentially, bandit online convex optimization is a gradient-free method for solving convex optimization problems.…”
mentioning; confidence: 99%
“…On the other hand, noting that it has been shown in [9], [39], [40] that global optima of nonconvex optimization problems can be found at a linear rate if the global cost function satisfies the Polyak-Łojasiewicz (P-Ł) condition, another core theoretical question arises.…”
Section: A. Related Work and Motivation; mentioning; confidence: 99%
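For context, the Polyak-Łojasiewicz (P-Ł) condition referenced in this excerpt is, in its standard form (a well-known definition, not a detail taken from the cited works),

\frac{1}{2}\,\lVert \nabla f(x) \rVert^{2} \;\ge\; \mu \left( f(x) - f^{\star} \right) \quad \text{for all } x,

where f^{\star} is the minimum value of f and \mu > 0 is a constant. Under this condition, gradient-type methods drive f(x_k) - f^{\star} to zero at a linear (geometric) rate even when f is nonconvex.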
“…Yet, we must point out that there exist various alternatives for distributed optimization, such as gossip algorithms [94], as well as newer approaches including, for instance, distributed second-order Newton methods [95], distributed zero-order optimization methods [96], and parallel coordinate descent [97]. Depending on the task specification, these alternatives might be more effective than ADMM.…”
Section: B. Distributed Training of the Learning Models; mentioning; confidence: 99%