2020
DOI: 10.1609/aaai.v34i06.6561

Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization

Abstract: We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations, where the goal is to approximate the true Pareto set of solutions while minimizing the number of function evaluations. For example, in hardware design optimization, we need to find the designs that trade-off performance, energy, and area overhead using expensive simulations. We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluat…

Cited by 52 publications (60 citation statements)
References 11 publications
“…A common drawback of this family of algorithms is that reduction to single-objective optimization can potentially lead to more exploitation behavior, resulting in sub-optimal solutions. PAL (Zuluaga, Sergent, Krause, & Püschel, 2013), PESMO (Hernández-Lobato et al., 2016), and the concurrent works USeMO (Belakaria, Deshwal, Jayakodi, & Doppa, 2020b) and MESMO (Belakaria et al., 2019) are principled algorithms based on information theory. PAL tries to classify the input points based on the learned models into three categories: Pareto optimal, non-Pareto optimal, and uncertain.…”
Section: Single-fidelity Multi-objective Optimization (mentioning)
confidence: 99%
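The three-way classification attributed to PAL in this statement can be illustrated with a minimal sketch; this is not the authors' code. Each candidate gets a per-objective confidence rectangle from its GP model, and it is labeled Pareto optimal, non-Pareto optimal, or uncertain by comparing pessimistic and optimistic corners. The arrays `mu` and `sigma` (posterior means and standard deviations, one column per objective, maximization assumed) and the scaling parameter `beta` are hypothetical inputs.

```python
# Minimal sketch of a PAL-style three-way classification (illustrative only).
# mu, sigma: hypothetical (n_candidates, n_objectives) arrays of GP posterior
# means and standard deviations; all objectives are maximized.
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return np.all(a >= b) and np.any(a > b)

def classify(mu, sigma, beta=2.0):
    lo = mu - beta * sigma          # pessimistic corner of each confidence rectangle
    hi = mu + beta * sigma          # optimistic corner
    labels = []
    for i in range(len(mu)):
        others = [j for j in range(len(mu)) if j != i]
        if any(dominates(lo[j], hi[i]) for j in others):
            # even i's optimistic outcome is dominated by some pessimistic outcome
            labels.append("non-pareto")
        elif not any(dominates(hi[j], lo[i]) for j in others):
            # no optimistic outcome of another point can dominate i's pessimistic outcome
            labels.append("pareto")
        else:
            labels.append("uncertain")
    return labels

# Hypothetical example: 3 candidates, 2 objectives.
mu = np.array([[0.9, 0.5], [0.2, 0.9], [0.1, 0.1]])
sigma = np.full((3, 2), 0.05)
print(classify(mu, sigma))          # -> ['pareto', 'pareto', 'non-pareto']
```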
“…This paper takes the first step to synergistically combine the advantages of latent space and generic/domain-specific kernels to automate the overall BO workflow, which is highly advantageous for scientists and engineers who are the real users of this technology. We do not claim that LADDER is the optimal algorithm to achieve this goal, but our hope is that LADDER will inspire the BO community to explore this important research direction to develop better algorithms in the future.…”
Section: Discussion and Limitations (mentioning)
confidence: 99%
“…For example, in a drug design application, each candidate structure is a molecule, and evaluation involves performing an expensive physical lab experiment. Bayesian optimization (BO) [59,20] is an effective framework for optimizing expensive black-box functions and has shown great success in practice [63,74,21,6,5]. The key idea is to learn a cheap-to-evaluate surrogate statistical model, e.g., a Gaussian process (GP), from past function evaluations and employ it to select inputs with high utility for evaluation.…”
Section: Introduction (mentioning)
confidence: 99%
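The surrogate-plus-selection loop described in this passage can be made concrete with a minimal single-objective example. This is a generic BO sketch, not USeMO: the toy black-box function, the scikit-learn GP surrogate, the expected-improvement acquisition, and the random candidate grid are all assumptions made for illustration.

```python
# Minimal sketch of the generic BO loop: fit a cheap GP surrogate to past
# evaluations, score candidates with an acquisition function, evaluate the best.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def blackbox(x):                       # stand-in for an expensive evaluation (assumed)
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))    # a few initial evaluations
y = blackbox(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(-2, 2, size=(500, 1))          # random candidate grid
    mu, std = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(std, 1e-9)
    ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)].reshape(1, -1)         # high-utility input
    X = np.vstack([X, x_next])
    y = np.append(y, blackbox(x_next).ravel())

print("best value found:", y.max())
```

The multi-objective methods discussed in these citation statements differ from this scalar loop mainly in maintaining one surrogate per objective and replacing the scalar acquisition with a multi-objective selection rule.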
“…Uncertainty reduction methods like PAL (Zuluaga et al. 2013), PESMO (Hernández-Lobato et al. 2016), and the concurrent works USeMO (Belakaria et al. 2020) and MESMO (Belakaria, Deshwal, and Doppa 2019) are principled algorithms based on information theory. In each iteration, PAL selects the candidate input for evaluation towards the goal of minimizing the size of the uncertain set.…”
Section: Related Work (mentioning)
confidence: 99%
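The uncertainty-reduction rule described here can be approximated with a short, self-contained sketch: evaluate the candidate whose confidence hyper-rectangle across objectives has the largest volume, since resolving it shrinks the uncertain region fastest. This is a simplification, not PAL's exact rule (PAL restricts the choice to points still classified as uncertain); the `mu`/`sigma` arrays and `beta` below are hypothetical.

```python
# Simplified uncertainty-reduction selection: pick the candidate with the
# largest confidence-rectangle volume across objectives (illustrative only).
import numpy as np

def select_most_uncertain(mu, sigma, beta=2.0):
    widths = 2.0 * beta * sigma         # per-objective interval widths
    volumes = np.prod(widths, axis=1)   # rectangle volume per candidate
    return int(np.argmax(volumes))      # index of the candidate to evaluate next

# Hypothetical example: 4 candidates, 2 objectives.
mu = np.array([[0.3, 0.8], [0.5, 0.5], [0.9, 0.1], [0.4, 0.4]])
sigma = np.array([[0.05, 0.02], [0.30, 0.25], [0.01, 0.02], [0.10, 0.40]])
print(select_most_uncertain(mu, sigma))   # -> 1 (largest rectangle)
```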
“…We would like to apologize for the omission of the citation of Takeno et al. (2019) under subsection "Approximation 2" in Section 4.2 of Belakaria et al. (2020).…”
(mentioning)
confidence: 99%