2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2018.8516936

Correcting Boundary Over-Exploration Deficiencies in Bayesian Optimization With Virtual Derivative Sign Observations

Abstract: Bayesian optimization (BO) is a global optimization strategy designed to find the minimum of an expensive black-box function, typically defined on a compact subset of R^d, by using a Gaussian process (GP) as a surrogate model for the objective. Although currently available acquisition functions address this goal with different degrees of success, an over-exploration effect of the contour of the search space is typically observed. However, in problems like the configuration of machine learning algorithms, the f…
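The over-exploration effect named in the abstract is easy to reproduce with a toy model. The sketch below is illustrative only, not the paper's code: the objective sin(6x), the kernel hyperparameters, and the LCB trade-off beta are arbitrary assumptions. It fits a 1-D GP to interior observations and minimises a lower-confidence-bound acquisition; because a stationary kernel's predictive variance grows toward the edges of the search interval, the next query lands on the boundary. The paper's remedy, virtual derivative sign observations at the boundary, requires a non-Gaussian (probit) likelihood and is not reproduced here.

```python
# Illustrative sketch only: a 1-D GP with a lower-confidence-bound (LCB)
# acquisition, showing boundary over-exploration. Hyperparameters and the
# toy objective are arbitrary assumptions, not the paper's setup.
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Standard GP regression posterior mean and variance on x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
    return mean, np.maximum(var, 0.0)

# Toy objective observed only at interior points of [0, 1].
x_train = np.array([0.3, 0.5, 0.7])
y_train = np.sin(6 * x_train)

x_grid = np.linspace(0.0, 1.0, 201)
mean, var = gp_posterior(x_train, y_train, x_grid)

# LCB acquisition for minimisation: mean - beta * std. The predictive
# variance peaks far from the data, i.e. at the interval's edges, so the
# acquisition minimum is pulled onto the boundary.
beta = 2.0
lcb = mean - beta * np.sqrt(var)
print(f"next query: x = {x_grid[np.argmin(lcb)]:.2f}")  # lands on the boundary (x = 1.00 here)
```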

Cited by 14 publications (14 citation statements) · References 7 publications (11 reference statements)
“…in areas of high likelihood, leading to sub-optimal posterior estimation whenever the prior is also informative. Comparing Figures 5a and 5b shows that using the expintvar strategy also avoids unnecessary evaluations on the boundary, which is often undesirable in Bayesian optimisation methods as well; see Siivola et al [2017] for a discussion. Curiously, uniform sampling (the unif rule) works well when prior information is strong.…”
Section: Strength of the Prior
Confidence: 99%
“…It has also been reported that US [uncertainty sampling] has a disproportionate tendency toward selecting points on the boundary of the search space, simply because the variance is often largest far away from regions where data has been collected. Whether or not boundary points are informative is still an open question [27,47].…”
Section: Acquisition Functions for Bayesian Experimental Design
Confidence: 99%
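The boundary bias of pure uncertainty sampling is mechanical: with a stationary kernel the predictive variance is largest wherever data is farthest away, which on a compact domain is the edge. A minimal sketch, assuming a toy 1-D setup with an RBF kernel (the design points and lengthscale are arbitrary, not from the cited papers):

```python
# Hedged illustration of the quoted point: uncertainty sampling (pick the
# point of maximal predictive variance) is drawn to the boundary, where
# data is farthest away. Toy setup, not from the cited papers.
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x_train = np.array([0.35, 0.55, 0.75])   # interior design points on [0, 1]
x_grid = np.linspace(0.0, 1.0, 201)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
K_s = rbf(x_train, x_grid)
# Predictive variance: k(x,x) - k(x,X) K^{-1} k(X,x); the prior variance is 1.
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s), axis=0)

print(f"uncertainty sampling queries x = {x_grid[np.argmax(var)]:.2f}")  # -> 0.00, a boundary point
```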
“…In practice, we sample q continuous draws from the posterior predictive distribution of the latent variable and select each batch location as the minimum of the corresponding sample. The problem with independent draws is over-exploration of the border (Siivola et al, 2018). If the uncertainty is large on the border, or there are border minima, many draws from the same batch are likely to attain their minimum exactly on the border, resulting in inefficient use of samples.…”
Section: Thompson Sampling for Batches
Confidence: 99%
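The batch failure mode described here is straightforward to sketch. Assuming a toy 1-D GP (the objective, kernel, and batch size q = 5 are arbitrary illustrations, not the cited paper's setup), each of q independent posterior draws is minimised over a grid; when posterior variance is high at the border, several of the q minimisers tend to coincide there, wasting batch slots:

```python
# Hedged sketch of the batch Thompson-sampling rule quoted above: draw q
# samples from the GP posterior over a grid and take each sample's
# minimiser as one batch point. Toy model only, not the cited paper's code.
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

x_train = np.array([0.3, 0.5, 0.7])
y_train = np.sin(6 * x_train)
x_grid = np.linspace(0.0, 1.0, 201)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
K_s = rbf(x_train, x_grid)
K_ss = rbf(x_grid, x_grid)

# Posterior mean and covariance of the latent function on the grid.
mean = K_s.T @ np.linalg.solve(K, y_train)
cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)

# q independent posterior draws; each batch point is one draw's argmin.
q = 5
draws = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(x_grid)), size=q)
batch = x_grid[np.argmin(draws, axis=1)]
# With high border uncertainty, several entries typically repeat at 0.0 or 1.0.
print("batch locations:", np.round(batch, 2))
```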