2021 · Preprint
DOI: 10.48550/arxiv.2102.01121

Distributed Zero-Order Optimization under Adversarial Noise

Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov

Abstract: We study the problem of distributed zero-order optimization for a class of strongly convex functions. These functions are formed as the average of local objectives, each associated with a node in a prescribed network of connections. We propose a distributed zero-order projected gradient descent algorithm to solve this problem. Exchange of information within the network is permitted only between neighbouring nodes. A key feature of the algorithm is that it can query only function values, subject to a general noise model…
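
The abstract describes the algorithm only at a high level. Below is a minimal, illustrative Python sketch of a distributed zero-order projected gradient scheme in this spirit: it assumes a randomized two-point gradient estimator, a doubly stochastic gossip matrix W encoding the neighbour-only exchange, and a Euclidean ball as the constraint set. All names (distributed_zo_pgd, two_point_grad_estimate) and parameter values are hypothetical, not taken from the paper, and the paper's general (possibly adversarial) noise model is reduced here to whatever noise the per-node oracles inject.

import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto a ball; an illustrative convex constraint
    # set standing in for the paper's general projection step.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def two_point_grad_estimate(f_noisy, x, h, rng):
    # Randomized two-point estimator: query the noisy oracle at x + h*z
    # and x - h*z for a uniform random direction z on the unit sphere.
    d = x.size
    z = rng.standard_normal(d)
    z /= np.linalg.norm(z)
    return (d / (2.0 * h)) * (f_noisy(x + h * z) - f_noisy(x - h * z)) * z

def distributed_zo_pgd(oracles, W, x0, steps, eta, h, seed=0):
    # oracles: one noisy zero-order oracle per node (value queries only).
    # W: doubly stochastic gossip matrix; W[i, j] > 0 only for neighbours.
    rng = np.random.default_rng(seed)
    n = len(oracles)
    X = np.tile(x0, (n, 1))           # row i = current estimate at node i
    for _ in range(steps):
        X = W @ X                     # neighbour-only averaging (consensus)
        for i in range(n):            # local zero-order step, then project
            g = two_point_grad_estimate(oracles[i], X[i], h, rng)
            X[i] = project_ball(X[i] - eta * g)
    return X.mean(axis=0)

A toy run with three nodes holding noisy quadratics (all numbers illustrative):

rng = np.random.default_rng(1)
targets = [np.array([0.3, -0.2]), np.array([-0.1, 0.4]), np.array([0.2, 0.1])]
oracles = [(lambda x, a=a: float(np.sum((x - a) ** 2)) + 0.01 * rng.standard_normal())
           for a in targets]
W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)   # doubly stochastic, 3 nodes
x_hat = distributed_zo_pgd(oracles, W, x0=np.zeros(2), steps=2000, eta=0.05, h=0.05)

The gossip step X = W @ X is the neighbour-only information exchange the abstract refers to; how quickly the nodes reach consensus depends on the spectral gap of W, while eta and h trade off gradient-estimate bias against noise amplification.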

Cited by 2 publications (3 citation statements) · References 24 publications (48 reference statements)

Citation statements (ordered by relevance):
“…Moreover, under different zeroth-order oracles, we show that our learning framework always exhibits a reduced variance of the estimated gradient compared with GVF-based policy evaluation. Note that most of the existing distributed ZOO algorithms [30][31][32][33] essentially evaluate policies via GVFs. In [34,35],…”
Section: Introduction (mentioning, confidence: 99%)
“…Multi-agent networks are one of the most representative systems that have broad applications and usually induce large-size optimization problems [12]. In recent years, distributed zeroth-order convex and non-convex optimizations on multi-agent networks have been extensively studied, e.g., [13]-[17], all of which decompose the original cost function into multiple functions and assign them to the agents. Unfortunately, the variable dimension for each agent is the same as that for the original problem.…”
Section: Introduction (mentioning, confidence: 99%)
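
The decomposition this passage refers to matches the setting in the abstract above: the global objective is the average of local objectives, and each agent optimizes over the full decision vector. A minimal statement of that setting, with notation (f_i, Θ, n, d) chosen here for illustration:

\min_{x \in \Theta \subseteq \mathbb{R}^d} f(x), \qquad f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),

where node i can query only noisy values of its own f_i and exchanges information with its neighbours; every node's iterate lives in the same \mathbb{R}^d as the original problem, which is the dimension issue the quoted passage raises.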
“…• In comparison to centralized ZOO [2]-[4], [9], [11] and distributed ZOO [13]-[17], we reduce the variable dimension for each agent, and construct local cost involving only partial agents. Such a framework avoids the influence of the convergence error and convergence rate of the consensus algorithm, and results in reduced variance and high scalability to large-scale networks.…”
Section: Introduction (mentioning, confidence: 99%)