2020 | Preprint
DOI: 10.48550/arxiv.2003.13001

Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling

Abstract: We consider the problem of minimizing a high-dimensional objective function, which may include a regularization term, using (possibly noisy) evaluations of the function. Such optimization is also called derivative-free, zeroth-order, or black-box optimization. We propose a new Zeroth-Order Regularized Optimization method, dubbed ZORO. When the underlying gradient is approximately sparse at an iterate, ZORO needs very few objective function evaluations to obtain a new iterate that decreases the objective function…
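The mechanism the abstract alludes to, recovering an approximately sparse gradient from far fewer than d function evaluations, can be sketched briefly. The sketch below is a minimal illustration under assumptions made here, not the paper's implementation: the Rademacher sampling directions, the iterative-hard-thresholding decoder, and the parameters s, num_samples, delta, and iters are all choices for exposition (the paper's title indicates it additionally uses an adaptive sampling rule, which is omitted here).

```python
import numpy as np

def estimate_sparse_gradient(f, x, s, num_samples, delta=1e-4, iters=50):
    """Sketch: recover an approximately s-sparse gradient of f at x from
    num_samples finite-difference measurements.

    Each measurement y_i = (f(x + delta*z_i) - f(x)) / delta approximates
    <z_i, grad f(x)> for a Rademacher direction z_i, so y ~ Z @ grad f(x)
    and the sparse gradient can be recovered as in compressed sensing.
    Iterative hard thresholding (IHT) serves as a simple decoder here.
    """
    d = x.size
    # Scale rows by 1/sqrt(num_samples) so the columns of Z have unit norm,
    # the usual compressed-sensing normalization.
    Z = np.random.choice([-1.0, 1.0], size=(num_samples, d)) / np.sqrt(num_samples)
    fx = f(x)
    y = np.array([(f(x + delta * z) - fx) / delta for z in Z])

    g = np.zeros(d)
    for _ in range(iters):
        g = g + Z.T @ (y - Z @ g)             # gradient step on ||Z g - y||^2 / 2
        g[np.argsort(np.abs(g))[:-s]] = 0.0   # keep only the s largest entries

    # Debias: least squares restricted to the identified support.
    support = np.flatnonzero(g)
    g[support] = np.linalg.lstsq(Z[:, support], y, rcond=None)[0]
    return g

# Toy check: a 200-dimensional function whose gradient is exactly 3-sparse.
d = 200
f = lambda x: 10 * x[0] ** 2 + 8 * x[1] ** 2 + 6 * x[2] ** 2
x = np.ones(d)
g = estimate_sparse_gradient(f, x, s=3, num_samples=80)
print(np.flatnonzero(g))   # typically [0 1 2]
print(np.round(g[:3], 2))  # close to the true gradient (20, 16, 12)
```

Note that only 80 evaluations are used to estimate a gradient in 200 dimensions; a dense finite-difference scheme would need on the order of d evaluations per iterate.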

Cited by 7 publications (37 citation statements) | References 23 publications

Citation statements, ordered by relevance:
“…Similar ideas can be found in [26,27,28]. More recently, [29] […] see [30,31,6,32,33,34] for recent progress in overcoming this.…”
Section: Prior Art (mentioning; confidence: 63%)
“…This setting is known as derivative-free optimization (DFO). DFO has a long tradition and has recently been revived for reinforcement learning [1,2,3], hyperparameter tuning [4], and adversarial attacks on neural-network-based classifiers [5,6]. Common to all such applications is the assumption that evaluating f(x) is expensive, time-consuming, or inconvenient.…”
Section: Introduction (mentioning; confidence: 99%)
“…Besides the related work discussed above, it is worth noting that a recent paper [19] uses compressed sensing for zeroth-order optimization, which exhibits a mathematical structure similar to that of this study. However, [19] considers the centralized setting and only establishes convergence to a neighborhood of the minimizer.…”
Section: Introduction (mentioning; confidence: 82%)
“…Such undesirable characteristics render many efficient algorithms, such as stochastic first-order methods, no longer directly applicable. As a remedy, zeroth-order optimization (ZOO), also known as black-box or derivative-free optimization [4], has attracted the attention of many researchers.…”
(mentioning; confidence: 99%)
“…Closely related to this work is the gradient-estimation-based ZOO framework discussed in seminal works such as [14,18,1,6,7], where theoretical underpinnings for both the convex and nonconvex cases have been established. However, as pointed out by [4,3], the performance of almost all existing ZOO algorithms deteriorates rapidly as the problem dimensionality d increases. In particular, for convex ZOO with fixed σ and L, such an algorithm achieves a query complexity of O(d²/ε²) calls to the zeroth-order oracle according to [14].…”
(mentioning; confidence: 99%)
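For contrast with ZORO's sparsity-exploiting approach, the gradient-estimation framework referenced in this statement typically builds on a two-point smoothing estimator that does not exploit sparsity. The sketch below is an illustrative assumption about that style of estimator, not a reproduction of the scheme in [14]; mu and num_samples are arbitrary choices made here.

```python
import numpy as np

def two_point_gradient(f, x, mu=1e-4, num_samples=20):
    """Classical Gaussian-smoothing gradient estimator used by many ZOO
    methods (sketch):  g ~ mean over i of (f(x + mu*u_i) - f(x)) / mu * u_i
    with u_i ~ N(0, I).  It is unbiased for the gradient of a smoothed
    version of f, but each query probes only one random direction, so its
    variance grows with the dimension d."""
    fx = f(x)
    g = np.zeros(x.size)
    for _ in range(num_samples):
        u = np.random.randn(x.size)
        g += (f(x + mu * u) - fx) / mu * u
    return g / num_samples
```

The growth of the estimator's variance with d is one source of the dimension dependence in query complexity that the quoted passage points out.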