2016
DOI: 10.48550/arxiv.1602.05149
Preprint

Parallel Bayesian Global Optimization of Expensive Functions

Abstract: We consider parallel global optimization of derivative-free, expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian optimization algorithm proposed by Ginsbourger et al. (2007). At the heart of this algorithm is maximizing the information criterion called the "multi-points expected improvement", or the q-EI. To accomplish this, we use infinitesimal perturbation analysis (IPA) to construct a stochastic gradient estimator and show …
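
As a rough illustration of the approach the abstract sketches (not the authors' implementation): q-EI can be estimated by Monte Carlo over the joint GP posterior at a batch, and differentiating the reparameterized sample paths yields an IPA-style stochastic gradient of the kind the paper constructs. The names posterior_mean and posterior_cholesky, and the maximization convention, are assumptions here.

```python
# Hypothetical sketch: Monte Carlo q-EI and its IPA / sample-path gradient.
# Assumes a GP posterior given by differentiable functions posterior_mean(X)
# (q-vector) and posterior_cholesky(X) (q x q Cholesky factor); these are
# placeholders, not taken from the paper's code.
import jax
import jax.numpy as jnp

def qei_estimate(X, z, f_best, posterior_mean, posterior_cholesky):
    """Monte Carlo q-EI at a batch X (q x d), using fixed standard normals z (m x q)."""
    mu = posterior_mean(X)            # (q,) posterior mean at the batch
    L = posterior_cholesky(X)         # (q, q) Cholesky of the posterior covariance
    samples = mu + z @ L.T            # (m, q) joint posterior draws (reparameterized)
    improvement = jnp.maximum(samples.max(axis=1) - f_best, 0.0)
    return improvement.mean()         # estimates E[(max_i Y_i - f*)^+]

# Differentiating the sample-path estimate with respect to the batch gives an
# IPA-style stochastic gradient of q-EI (unbiased under regularity conditions).
qei_grad = jax.grad(qei_estimate, argnums=0)
```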

Cited by 24 publications (41 citation statements). References 39 publications. Citation types: 0 supporting, 41 mentioning, 0 contrasting.

Citation statements (ordered by relevance):
“…Our method for optimizing the EI-CF acquisition function uses an unbiased estimator of the gradient of EI-CF within a multistart stochastic gradient ascent framework. This technique is structurally similar to methods developed for optimizing acquisition functions in other BO settings without composite objectives, including the parallel expected improvement (Wang et al., 2016) and the parallel knowledge-gradient (Wu & Frazier, 2016).…”
Section: Related Methodological Literature (mentioning)
confidence: 99%
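
A minimal sketch of the multistart stochastic gradient ascent framework this statement refers to. The acquisition-specific pieces are passed in as closures (grad_fn and value_fn are illustrative names), and the restart count and step-size schedule are arbitrary choices, not taken from the cited papers.

```python
# Hypothetical multistart stochastic gradient ascent over a batch of q points
# in the unit box [0,1]^d. grad_fn(X, key) returns a stochastic gradient of the
# acquisition (e.g. the qei_grad sketched above); value_fn(X, key) returns a
# Monte Carlo estimate of its value, used only to rank the restarts.
import jax
import jax.numpy as jnp

def multistart_sga(grad_fn, value_fn, q, d, key,
                   n_starts=10, n_steps=200, lr0=0.1):
    best_X, best_val = None, -jnp.inf
    for _ in range(n_starts):
        key, k_init = jax.random.split(key)
        X = jax.random.uniform(k_init, (q, d))                 # random restart
        for t in range(n_steps):
            key, k_t = jax.random.split(key)
            step = lr0 / jnp.sqrt(t + 1.0)                     # diminishing step sizes
            X = jnp.clip(X + step * grad_fn(X, k_t), 0.0, 1.0) # ascend, project onto box
        key, k_v = jax.random.split(key)
        val = value_fn(X, k_v)
        if val > best_val:                                     # keep the best restart
            best_X, best_val = X, val
    return best_X, best_val
```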
“…However, because we do not have access to f(x), we can instead estimate the expected utility $\mathbb{E}_{\hat{y} \sim p(y \mid x, \mathcal{D})}[u(y^{+}, \hat{y})]$ in place of $u(y^{+}, f(x))$. If we adopt improvement over $y^{+}$ as our utility function, our expected utility expression becomes the multi-point expected improvement (b-EI) batch acquisition function (Wang et al., 2016a). Similarly, if we use binarized improvement as our utility, our expected utility will reproduce the multi-point probability of improvement (b-PI) batch acquisition function (Wang et al., 2016a). Of course, unlike with the batch acquisition functions common in Bayesian optimization, we are not trying to find the location of the next batch of query points, but are instead predicting the score of a search space by averaging over batches of points within that search space.…”
Section: Scoring Search Spaces Given Budgets (mentioning)
confidence: 99%
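
Concretely, under the setup in that statement, both scores reduce to simple averages over joint posterior samples. A hedged sketch; sample shapes and names are illustrative, not from the cited paper:

```python
# Hypothetical sketch: given m joint posterior draws at a q-point batch
# (samples has shape (m, q)) and the incumbent value y_best, the improvement
# utility averages to b-EI and the binarized-improvement utility to b-PI.
import jax.numpy as jnp

def b_ei(samples, y_best):
    improvement = jnp.maximum(samples.max(axis=1) - y_best, 0.0)
    return improvement.mean()              # E[(max_j y_j - y+)^+]

def b_pi(samples, y_best):
    improved = samples.max(axis=1) > y_best
    return improved.mean()                 # P(max_j y_j > y+)
```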
“…Marmin et al. [44,45] proposed an analytical simplification of q-EI to avoid high-dimensional integration for large batches in gradient-based ascent algorithms. Wang et al. [46] also employed a gradient-based approach, but relied on infinitesimal perturbation analysis to construct a stochastic gradient estimator that simplifies the high-dimensional integration in the original q-EI framework. Wang et al. [46] and Wu and…”
Section: Sequential Batch Parallel (mentioning)
confidence: 99%
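
For reference, the construction these last statements describe can be written compactly. This is a sketch in our own notation, not copied from the cited papers, with the maximization convention assumed:

```latex
% Sketch: with joint posterior Y(X) ~ N(mu(X), Sigma(X)) at the batch X
% and Cholesky factor C(X) of Sigma(X),
\[
  q\text{-EI}(X) \;=\; \mathbb{E}\Big[\big(\max_{1 \le i \le q} Y_i(X) - f^{*}\big)^{+}\Big],
  \qquad Y(X) = \mu(X) + C(X)\,Z, \quad Z \sim \mathcal{N}(0, I_q).
\]
% Fixing Z and taking i^* = argmax_i Y_i(X), the sample-path (IPA) quantity
\[
  g(X, Z) \;=\; \mathbf{1}\{Y_{i^{*}}(X) > f^{*}\}\,
  \nabla_X\!\left(\mu_{i^{*}}(X) + [\,C(X)\,Z\,]_{i^{*}}\right)
\]
% is an unbiased estimator of the gradient of q-EI under regularity
% conditions, replacing the high-dimensional integral with cheap Monte
% Carlo averages.
```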