2020
DOI: 10.1016/j.knosys.2020.105663
Incorporating expert prior in Bayesian optimisation via space warping

Abstract: Bayesian optimisation is a well-known sample-efficient method for the optimisation of expensive black-box functions. However, when dealing with big search spaces, the algorithm goes through several low-function-value regions before reaching the optimum of the function. Since the function evaluations are expensive in terms of both money and time, it may be desirable to alleviate this problem. One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation. In its stand…

Cited by 16 publications (17 citation statements)
References 20 publications
“…Baselines We empirically evaluate πBO against the most competitive approaches for priors over the optimum described in Section 2.3: BOPrO (Souza et al., 2021) and BO in Warped Space (BOWS) (Ramachandran et al., 2020). To contextualize the performance of πBO, we provide additional, simpler baselines: random sampling, sampling from the prior, and BO with a prior-based initial design.…”
Section: Methods
confidence: 99%
“…Additionally, it is restricted to only one specific acquisition function. Ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. While the approach is model- and acquisition-function agnostic, it requires invertible priors and does not empirically display the ability to recover from misleading priors.…”
Section: Learning From Previous Experiments
confidence: 99%
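The probability-integral-transform warping described in this citation statement can be sketched in one dimension. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian prior, its parameters, and the grid are all invented for the example.

```python
# Minimal sketch (assumptions, not the authors' code): warping a 1-D
# search space with the probability integral transform, so that a
# uniform search in the warped space concentrates near the prior mode.
import numpy as np
from scipy.stats import norm

# Assumed expert prior over the optimum's location; it must have an
# invertible CDF, as the citing paper notes.
prior = norm(loc=0.7, scale=0.1)

def warp(x):
    """Original coordinate x -> warped coordinate z = F(x) in (0, 1)."""
    return prior.cdf(x)

def unwarp(z):
    """Warped coordinate z -> original coordinate x = F^{-1}(z)."""
    return prior.ppf(z)

# A uniform grid in warped space maps to points clustered around the
# prior mode (0.7) in the original space; BO's acquisition function
# would be optimised over z and the objective evaluated at unwarp(z).
z_grid = np.linspace(0.05, 0.95, 7)
x_grid = unwarp(z_grid)
```

The stretching follows from the chain rule: where the prior density is high, the CDF is steep, so that region occupies a proportionally larger share of the warped unit interval.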
“…Promising work in this direction includes various methods for integrating prior knowledge into Bayesian optimization. This can be achieved by directly specifying priors about the location of the optimum [40,41,42,43], or by specifying structural priors, e.g., in the form of log-transformations of hyperparameters [44], monotonicity constraints [45], or warping of hyperparameters [46].…”
Section: Utilization Of Human Model Comprehension
confidence: 99%
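As one concrete instance of the structural priors this statement mentions, a log-transformation of a hyperparameter can be sketched as follows; the learning-rate example and its bounds are assumptions for illustration, not taken from the cited works.

```python
# Illustrative sketch (assumed bounds, not from the cited works):
# a structural prior encoded as a log-transformation of a
# hyperparameter such as a learning rate.
import numpy as np

LO, HI = 1e-5, 1e-1  # assumed learning-rate bounds

def to_model_space(u):
    """Map u in [0, 1] to a log-uniformly spaced value in [LO, HI]."""
    log_lo, log_hi = np.log10(LO), np.log10(HI)
    return 10 ** (log_lo + u * (log_hi - log_lo))

# The optimiser works with u; equal steps in u multiply the learning
# rate by a constant factor, matching its multiplicative effect on
# training dynamics.
rates = [to_model_space(u) for u in (0.0, 0.5, 1.0)]
```

Under this transform the surrogate model sees the hyperparameter on a scale where its effect is roughly linear, which is the point of such structural priors.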