2019
DOI: 10.1613/jair.1.11288
Multi-fidelity Gaussian Process Bandit Optimisation

Abstract: In many scientific and engineering applications, we are tasked with the maximisation of an expensive-to-evaluate black-box function f. Traditional settings for this problem assume access to only this single function. However, in many cases cheap approximations to f may be obtainable. For example, the expensive real-world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to cheaply eliminate regions of low function value and use the expensive evalu…
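
The core idea of the abstract (screen with a cheap approximation, then spend the expensive budget only where it matters) can be illustrated with a small sketch. Everything below, including the toy f_cheap/f_expensive pair, the scikit-learn GPs, and the optimistic filtering quantile, is an illustrative assumption and not the paper's MF-GP-UCB algorithm.

```python
# Two-stage sketch: a cheap approximation screens out low-value regions,
# then the expensive function is evaluated only on the survivors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_expensive(x):            # "real" objective: costly to evaluate
    return np.sin(3 * x) + 0.5 * x

def f_cheap(x):                # biased but cheap approximation
    return f_expensive(x) + 0.3 * np.cos(10 * x)

rng = np.random.default_rng(0)
candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

# Stage 1: fit a GP to many cheap evaluations and keep only promising candidates.
X_lo = rng.uniform(0.0, 2.0, size=(40, 1))
gp_lo = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_lo, f_cheap(X_lo).ravel())
mu_lo, sd_lo = gp_lo.predict(candidates, return_std=True)
ucb_lo = mu_lo + 2.0 * sd_lo                                   # optimistic value under the cheap model
keep = ucb_lo >= np.quantile(ucb_lo, 0.8)                      # discard clearly poor regions

# Stage 2: spend a small expensive budget on the surviving region only.
X_hi = candidates[keep][:: max(1, int(keep.sum()) // 8)]       # roughly 8 expensive evaluations
gp_hi = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X_hi, f_expensive(X_hi).ravel())
mu_hi, _ = gp_hi.predict(candidates[keep], return_std=True)
print("estimated maximiser:", candidates[keep][np.argmax(mu_hi)])
```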


Citations: cited by 78 publications (77 citation statements)
References: 23 publications (39 reference statements)
“…A GP model with multiple information sources was first considered in [20]. Since then, optimization with multiple information sources has been considered under strict requirements, such as models forming a hierarchy of increasing accuracy and without considering different costs [21], [22]. More recently, [23] used a myopic policy, called the 'knowledge gradient' by [24], in order to determine which parameters to evaluate.…”
Section: Introduction (mentioning)
confidence: 99%
“…As a final word on computational complexity, we observe that finding the optimum of the merit function calls for multiple estimations of the updated variance b^(L)(x|x, l), which in turn calls for the computation of variance-reduction terms, such as b^(l)(x|x, l), defined by (22). The computationally intensive part of this evaluation is the inversion of the GP matrices in (22). The second one, K^(l), does not depend on the new point x*, so it can be factorized and inverted once and for all during the search for the pair (x*, l*) maximizing the merit function.…”
Section: End End End (mentioning)
confidence: 99%
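
The point raised in this excerpt, that the kernel matrix over the already-observed points does not depend on the candidate and can therefore be factorized once and reused, can be sketched in a few lines. The notation, helper names, and RBF kernel below are assumptions for illustration; they do not reproduce the cited paper's equation (22).

```python
# "Factorize once" sketch: the Cholesky factor of the kernel matrix K over observed
# points is computed a single time, then reused for the posterior variance at every
# candidate point visited during the search.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

X = np.random.default_rng(1).uniform(0, 2, size=(50, 1))    # observed inputs
K = rbf(X, X) + 1e-6 * np.eye(len(X))                        # kernel matrix: fixed during the search
chol = cho_factor(K)                                          # factorize once

def posterior_var(x_star):
    """Posterior variance at a candidate, reusing the cached Cholesky factor."""
    xs = x_star.reshape(1, -1)
    k = rbf(X, xs)                                            # cross-covariance, shape (n, 1)
    return (rbf(xs, xs) - k.T @ cho_solve(chol, k)).item()

# During the search over candidates, only these cheap triangular solves are repeated.
variances = [posterior_var(x) for x in np.linspace(0, 2, 100).reshape(-1, 1)]
```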
“…The work in [21] presents an MF optimization framework for a continuous set of variable-fidelity models. In [22], the authors propose an alternative optimization strategy using a low-fidelity model to cheaply eliminate regions of poor performance and a high-fidelity model in the promising regions only.…”
Section: Introduction (mentioning)
confidence: 99%
“…Krause and Ong (2011) proposed and analyzed the CGP-UCB algorithm for the contextual GP bandits problem, where the mean reward function corresponding to context-action pairs is modeled as a sample from a GP on the context-action product space. Kandasamy et al. (2016) considered a multi-fidelity version of the GP bandits problem in which they assumed the availability of a sequence of approximations of the true function f with increasing accuracy that are cheaper to evaluate. They proposed an extension of GP-UCB called MF-GP-UCB and derived information-type bounds on its cumulative regret.…”
Section: Prior Work (mentioning)
confidence: 99%
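
A rough sketch of the kind of rule this excerpt describes is given below: form an optimistic bound from every fidelity's posterior, pick the most promising candidate, and send the query to the cheapest fidelity that is still uncertain there. The constants beta, zeta, and gamma, and the scikit-learn models, are assumptions for illustration and not the exact MF-GP-UCB rule or its analysis.

```python
# One MF-GP-UCB-style step over a discrete candidate set and a list of fidelities.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def mf_ucb_step(gps, candidates, beta=2.0, zeta=(0.2, 0.0), gamma=(0.05,)):
    """gps: fitted GPs, lowest fidelity first. Returns (next point, fidelity index)."""
    ucbs, stds = [], []
    for gp, z in zip(gps, zeta):
        mu, sd = gp.predict(candidates, return_std=True)
        ucbs.append(mu + beta * sd + z)       # optimistic value allowed by this fidelity
        stds.append(sd)
    ucb = np.minimum.reduce(ucbs)             # every fidelity must permit a high value
    i = int(np.argmax(ucb))                   # candidate to query next
    for m, g in enumerate(gamma):             # cheapest fidelity that is still uncertain
        if stds[m][i] > g:
            return candidates[i], m
    return candidates[i], len(gps) - 1        # otherwise pay for the highest fidelity

# Usage with two toy fidelities (purely illustrative data).
rng = np.random.default_rng(2)
X = rng.uniform(0, 2, size=(20, 1))
gp_lo = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X, np.sin(3 * X).ravel() + 0.1)
gp_hi = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X[:5], np.sin(3 * X[:5]).ravel())
x_next, fid = mf_ucb_step([gp_lo, gp_hi], np.linspace(0, 2, 100).reshape(-1, 1))
print(f"query x={x_next} at fidelity {fid}")
```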