2021
DOI: 10.1007/978-3-030-92121-7_2

Learning to Optimize Black-Box Functions with Extreme Limits on the Number of Function Evaluations

Abstract: We consider black-box optimization in which only an extremely limited number of function evaluations, on the order of around 100, are affordable and the function evaluations must be performed in even fewer batches of a limited number of parallel trials. This is a typical scenario when optimizing variable settings that are very costly to evaluate, for example in the context of simulation-based optimization or machine learning hyperparameterization. We propose an original method that uses established approaches …
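The setting described in the abstract — a total budget of roughly 100 evaluations, spent in a few batches of parallel trials — can be made concrete with a deliberately naive baseline. The sketch below is NOT the paper's proposed method; it is plain batched random sampling, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def batched_random_search(f, bounds, n_batches=10, batch_size=10, rng=None):
    """Minimize a black-box function f under a tight evaluation budget
    (n_batches * batch_size evaluations), where each batch of candidate
    points could be evaluated in parallel. Pure random sampling baseline."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds: (low, high) per dim
    best_x, best_y = None, np.inf
    for _ in range(n_batches):
        # One batch of candidates; in practice these trials run in parallel.
        X = rng.uniform(lo, hi, size=(batch_size, len(lo)))
        ys = np.array([f(x) for x in X])
        i = ys.argmin()
        if ys[i] < best_y:
            best_x, best_y = X[i], ys[i]
    return best_x, best_y

# Example: a budget of 100 evaluations (10 batches of 10 parallel trials)
# on the 2-D sphere function.
x, y = batched_random_search(lambda x: float(np.sum(x**2)),
                             bounds=[(-5, 5), (-5, 5)],
                             n_batches=10, batch_size=10, rng=0)
```

Any method for this regime must beat such a baseline while respecting the same batch structure; the paper's contribution lies in how the candidate batches are chosen.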

Cited by 4 publications (4 citation statements)
References 27 publications
“…Across the board, the post hoc method works well. This is likely due to our extremely small number of evaluations for algorithm configuration [25], leading to more benefits for exploration. The online method typically performs poorly in this setting and it is possible that more sophisticated bandit algorithms [58] could perform better.…”
Section: Discussion
confidence: 99%
“…Since robotics applications and datasets are reasonably expensive operations (compared to purely synthetic tasks), our work is related to work on extremely few function evaluations, typically on the order of 100 [25]. Typically this is studied in the Machine Learning community as hyperparameter search, finding configurations for ideal neural network configuration and training [26], [18].…”
Section: Related Work
confidence: 99%
“…We then assessed the accuracy of the training phase using the validation set. A Bayesian algorithm [25] was chosen for hyperparameter tuning. To tune the model, we adjusted the number of trees, feature fraction, learning rate, and the maximum depth of the trees.…”
Section: Methods Details
confidence: 99%
“…To run Differential Evolution algorithms on the BBOB-functions, we make use of the pyade package, which is a Python-based implementation used in the field (Ansótegui et al, 2021; Nieto et al, 2021) that incorporates several variants of DE, including SHADE and L-SHADE employed for this study - see Section 2.2 for their description. We made some minor modifications to the base-code of pyade:…”
Section: Setup For the Experimentation On The BBOB Suite
confidence: 99%
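The SHADE and L-SHADE variants mentioned in the citation above build on classic Differential Evolution. As background, here is a hedged sketch of the underlying DE/rand/1/bin scheme they extend; this is a generic textbook formulation, not the pyade package's API, and the adaptive F/CR history of SHADE and L-SHADE's population shrinking are deliberately omitted.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=20, gens=50, F=0.8, CR=0.9, rng=None):
    """Classic DE/rand/1/bin minimizer. SHADE adapts F and CR from a
    success history; L-SHADE additionally shrinks the population over
    time. Both refinements are left out of this sketch."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third,
            # all distinct from the current individual i.
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one gene from the mutant.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy one-to-one selection.
            y = f(trial)
            if y <= fit[i]:
                pop[i], fit[i] = trial, y
    best = fit.argmin()
    return pop[best], fit[best]
```

On a smooth unimodal test function such as the 2-D sphere, this basic scheme converges close to the optimum within a few dozen generations, which is the behavior the adaptive variants improve on for harder BBOB landscapes.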