2021
DOI: 10.48550/arxiv.2101.03821
Preprint

Improved Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandit

Abstract: We consider the β-smooth (satisfying the generalized Hölder condition with parameter β > 2) stochastic convex optimization problem with a zero-order one-point oracle. The best known result was [1]: $\mathbb{E}\,f(\overline{x}_T) - f^{*} = \mathcal{O}\!\left(\frac{n^{2}}{\gamma T^{\frac{\beta-1}{\beta}}}\right)$ in the γ-strongly convex case, where n is the dimension and T is the number of oracle calls. In this paper we improve this bound to $\mathcal{O}\!\left(\frac{n^{2-\frac{1}{\beta}}}{\gamma T^{\frac{\beta-1}{\beta}}}\right)$. This work is based on results presented at the 63rd MIPT Conference, held in November 2020.
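For context on the zero-order one-point oracle setting of the abstract, below is a minimal sketch of a kernel-smoothed one-point gradient estimator of the kind used in this line of work. It is illustrative only: the kernel K(r) = 3r is adequate only for low-order smoothness (exploiting β > 2 requires a higher-order, e.g. Legendre-polynomial, kernel), and the step-size, smoothing parameter, and projection radius are placeholder choices, not the tuned schedules analyzed in the paper.

```python
import numpy as np


def one_point_grad_estimate(f, x, h, rng, kernel=lambda r: 3.0 * r):
    """One (possibly noisy) function value per gradient estimate.

    Kernel-smoothed randomized estimator: query f at x + h*r*e for a random
    unit direction e and scalar r ~ U[-1, 1], then reweight by K(r).  The
    default K(r) = 3r (which satisfies E[r K(r)] = 1) is adequate only for
    low-order smoothness; exploiting beta > 2 requires a higher-order
    (e.g. Legendre-polynomial) kernel.
    """
    n = x.shape[0]
    e = rng.normal(size=n)
    e /= np.linalg.norm(e)          # uniform direction on the unit sphere
    r = rng.uniform(-1.0, 1.0)      # scalar smoothing variable
    y = f(x + h * r * e)            # single zero-order oracle call
    return (n / h) * y * kernel(r) * e


def project_ball(x, radius):
    """Euclidean projection onto the centered ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)


def zero_order_sgd(f, x0, T=50_000, h=0.3, radius=3.0, seed=0):
    """Projected SGD driven by the one-point estimator.

    The 1/t step size matches a 1-strongly-convex objective; the fixed
    smoothing parameter h and the projection radius are illustrative
    choices, not the schedules from the paper.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for t in range(1, T + 1):
        g = one_point_grad_estimate(f, x, h, rng)
        x = project_ball(x - g / t, radius)
    return x


if __name__ == "__main__":
    # Toy usage: a noisy 1-strongly-convex quadratic in n = 5 dimensions.
    noise = np.random.default_rng(1)
    noisy_f = lambda z: 0.5 * float(z @ z) + 0.01 * noise.normal()
    print(zero_order_sgd(noisy_f, np.ones(5)))  # entries should end up near 0
```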

Cited by 6 publications (13 citation statements)
References 4 publications

“…The recent work [21] obtains the same improvement, using the gradient estimator of [2]. However, as we notice below, that estimator is less computationally appealing.…”
mentioning, confidence: 64%
“…Note that instead of the finite-difference approximation approach, in some applications we can use the kernel approach [43, 3]. Interest in this alternative has grown recently [2, 39].…”
Section: Gradient-free Methods
mentioning, confidence: 99%
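To make the finite-difference vs. kernel contrast in the excerpt above concrete, here is a minimal sketch of the classical randomized finite-difference (two-point) estimator; the function name and the choice of a spherical random direction are illustrative assumptions, not taken from either cited paper.

```python
import numpy as np


def two_point_finite_diff_grad(f, x, h, rng):
    """Randomized finite-difference estimator: two oracle calls per step.

    Compare with the one-point kernel-smoothed estimator sketched after the
    abstract, which uses a single (noisy) function value per step but pays
    for it with substantially higher variance.
    """
    n = x.shape[0]
    e = rng.normal(size=n)
    e /= np.linalg.norm(e)                          # random unit direction
    return n * (f(x + h * e) - f(x - h * e)) / (2.0 * h) * e
```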
“…Many more examples are possible; for example, as sketched in [CSV09, Section 1.2] one could consider tuning the regularization parameter in ridge regression. Comparing to, for example, the numerical experiment from [NG21] is less insightful, as their smoothing parameter relies on the variance of the noise.…”
Section: Non-convex Optimization
mentioning, confidence: 99%