2020
DOI: 10.48550/arxiv.2006.12093
Preprint

MUMBO: MUlti-task Max-value Bayesian Optimization

Abstract: We propose MUMBO, the first high-performing yet computationally efficient acquisition function for multi-task Bayesian optimization. Here, the challenge is to perform efficient optimization by evaluating low-cost functions somehow related to our true target function. This is a broad class of problems, including the popular task of multi-fidelity optimization. However, while information-theoretic acquisition functions are known to provide state-of-the-art Bayesian optimization, existing implementations for multi…
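
For orientation, the core quantity behind max-value entropy search (MES) methods such as MUMBO is the mutual information between an observation y(x) and the unknown global maximum f*. Below is a minimal sketch of the single-task MES score of Wang and Jegelka (2017), which MUMBO generalizes to multiple tasks; it is not the paper's own code, and the names (`mes_acquisition`, `mu`, `sigma`, `max_samples`) are illustrative. The GP posterior and the Monte Carlo samples of f* (e.g. obtained via a Gumbel approximation) are assumed to be supplied.

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, max_samples):
    """Approximate single-task Max-value Entropy Search (Wang & Jegelka, 2017).

    mu, sigma   : GP posterior mean and std at n candidate points, shape [n]
    max_samples : Monte Carlo samples of the global maximum f*, shape [m]
    Returns an estimate of the mutual information I(y(x); f*) per candidate.
    """
    # gamma[i, j] = (f*_j - mu_i) / sigma_i, the standardized gap to each f* sample.
    gamma = (max_samples[None, :] - mu[:, None]) / sigma[:, None]
    pdf, cdf = norm.pdf(gamma), norm.cdf(gamma)
    cdf = np.clip(cdf, 1e-10, 1.0)  # guard against log(0) for points far above sampled maxima
    # Closed-form entropy reduction of a Gaussian truncated at f*, averaged over samples.
    return np.mean(gamma * pdf / (2.0 * cdf) - np.log(cdf), axis=1)
```

Because f* is one-dimensional, this score needs only cheap univariate quantities; the abstract's claim of computational efficiency rests on MUMBO preserving this property while extending the information measure across related low-cost tasks.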

Cited by 6 publications (7 citation statements)
References 19 publications
“…That search goes under the name of Output-space Predictive Entropy Search (OPES) (Hoffman and Ghahramani, 2015) or Max-value Entropy Search (MES) (Wang and Jegelka, 2017), and MES has recently been generalized to multi-fidelity BO (Takeno et al., 2020; Moss et al., 2020).…”
Section: Related Work
confidence: 99%
“…Second, we assume that all channels of observations are of equal cost. Third, we focus on the use of DC, in contrast to (Takeno et al., 2020; Moss et al., 2020), which solely considered MI as the measure of information gain. Fourth, we incorporate integral observations of a black-box function, which has not been considered before in the multi-fidelity BO literature.…”
Section: Related Work
confidence: 99%
“…Finally, FlowMO will allow us to apply the extended Bayesian optimization methods designed for GP models to further improve efficiency in automated wet-lab workflows [MacLeod et al., 2020]. Potential extensions include batch [González et al., 2016], multi-task [Swersky et al., 2013], multi-fidelity [Moss et al., 2020c], and multi-objective optimization, as well as optimization with controllable experimental noise [Moss et al., 2020b].…”
Section: Future Work
confidence: 99%
“…The community has devoted significant effort to solving a wide variety of continuous, combinatorial, single-objective, and multi-objective optimization problems through the perspective of EM [9,10,11,12]. Another research direction for dealing with multitasking in the context of TO is multi-task Bayesian optimization [13], which extends Bayesian optimization approaches to multitasking environments [14,15,16,17]. Although it falls outside the focus of this paper due to its non-evolutionary nature, we note that Bayesian solvers, alongside those within EM, constitute the core of the contributions reported in the field of multitasking, with a significantly higher presence of EM methods.…”
Section: Introduction
confidence: 99%