2019
DOI: 10.1007/s10472-019-09644-8
Targeting solutions in Bayesian multi-objective optimization: sequential and batch versions

Abstract: Multi-objective optimization aims at finding trade-off solutions to conflicting objectives. These constitute the Pareto optimal set. In the context of expensive-to-evaluate functions, it is impossible and often non-informative to look for the entire set. As an end-user would typically prefer a certain part of the objective space, we modify the Bayesian multi-objective optimization algorithm which uses Gaussian Processes and works by maximizing the Expected Hypervolume Improvement, to focus the search in the pr…
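The abstract summarizes the standard EHVI-driven Bayesian loop that the paper builds on. Below is a minimal, illustrative sketch of that loop, not the authors' implementation: independent Gaussian processes are fit per objective and a Monte Carlo estimate of the Expected Hypervolume Improvement is maximized over a random candidate set. The toy bi-objective problem, the candidate-sampling strategy, and the helper names (hv_2d, mc_ehvi, non_dominated) are assumptions made for this example.

# Illustrative sketch (not the authors' code): an EHVI-driven Bayesian
# multi-objective loop with independent GPs per objective, restricted to
# two objectives so the hypervolume can be computed exactly.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def non_dominated(Y):
    """Return the non-dominated subset of Y (minimization)."""
    keep = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        keep[i] = not np.any(np.all(Y <= y, axis=1) & np.any(Y < y, axis=1))
    return Y[keep]

def hv_2d(front, ref):
    """Exact hypervolume dominated by a 2-D front w.r.t. reference point `ref` (minimization)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in front[np.argsort(front[:, 0])]:
        if x < ref[0] and y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def mc_ehvi(models, X_cand, front, ref, n_mc=64, seed=0):
    """Monte Carlo estimate of the Expected Hypervolume Improvement for each candidate."""
    rng = np.random.default_rng(seed)
    base = hv_2d(front, ref)
    means, stds = zip(*[m.predict(X_cand, return_std=True) for m in models])
    means, stds = np.column_stack(means), np.column_stack(stds)
    ehvi = np.zeros(len(X_cand))
    for _ in range(n_mc):
        samples = rng.normal(means, stds)          # one posterior draw per candidate and objective
        for i, y in enumerate(samples):
            ehvi[i] += max(hv_2d(np.vstack([front, y]), ref) - base, 0.0)
    return ehvi / n_mc

# One outer iteration on a toy bi-objective problem (problem and settings assumed for the example).
def f(x):
    return np.column_stack([x**2, (x - 1.0)**2])

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(8, 1))
Y = f(X[:, 0])
models = [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, j]) for j in range(2)]
front = non_dominated(Y)
ref = Y.max(axis=0) + 0.1                          # hypervolume reference point (assumption)
X_cand = rng.uniform(0.0, 1.0, size=(200, 1))
x_next = X_cand[np.argmax(mc_ehvi(models, X_cand, front, ref))]   # point to evaluate next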

Cited by 26 publications (15 citation statements)
References 42 publications
“…Both represent a trade-off between exploration and exploitation. The multiplicative EI (mEI) [12] is interesting in the context of the O-NAUTILUS method because it also takes into account a reference point provided by the DM. However, in our tests, solutions produced by mEI did not follow the DM's preferences very well.…”
Section: Expected ASF (mentioning)
Confidence: 99%
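The excerpt above refers to the multiplicative EI (mEI) of the cited paper, which targets a reference point R supplied by the decision maker. What follows is a minimal sketch of one common reading of that criterion, assuming independent GP posteriors per objective and taking mEI as the product of per-objective expected improvements measured against the coordinates of R; the wiring and function names are illustrative, not the authors' code.

# Illustrative sketch of a multiplicative EI (mEI) that targets a reference point R
# supplied by the decision maker (minimization; independent GP per objective).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, target):
    """Closed-form expected improvement of a Gaussian prediction below `target`."""
    sigma = np.maximum(sigma, 1e-12)
    z = (target - mu) / sigma
    return (target - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def m_ei(models, X_cand, R):
    """Product over objectives of the EI measured against the reference coordinates R_j."""
    score = np.ones(len(X_cand))
    for model, r_j in zip(models, R):
        mu, sigma = model.predict(X_cand, return_std=True)
        score *= expected_improvement(mu, sigma, r_j)
    return score

# Usage, reusing the GP models and candidate set from the sketch under the abstract:
# R = np.array([0.2, 0.3])                       # reference point from the DM (assumed values)
# x_next = X_cand[np.argmax(m_ei(models, X_cand, R))]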
“…Gaudrie [33] uses the projection (intersection in case of a continuous front) of the closest non-dominated point on the line connecting the estimated ideal and nadir points as default preference. Conditional Gaussian process simulations are performed to create possible Pareto fronts, each of which defines a sample for the ideal and the nadir point, and the estimated ideal and nadir are the medians of the samples.…”
Section: Literature Review (mentioning)
Confidence: 99%
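The excerpt describes estimating the ideal and nadir points from conditional Gaussian process simulations: each posterior draw yields a simulated Pareto front, hence one sample of the ideal and nadir, and the componentwise medians over the draws are taken as estimates. The sketch below illustrates that idea under the assumption of independent per-objective GPs; it reuses the models and the non_dominated helper from the sketch under the abstract and is not the cited implementation.

# Illustrative sketch: ideal/nadir estimation from conditional GP simulations,
# taking componentwise medians over simulated Pareto fronts.
import numpy as np

def estimate_ideal_nadir(models, X_grid, n_sims=50, seed=0):
    """Each posterior draw yields a simulated front and hence one ideal/nadir sample;
    the estimates are the componentwise medians over the draws."""
    ideals, nadirs = [], []
    for s in range(n_sims):
        Y_sim = np.column_stack(
            [m.sample_y(X_grid, n_samples=1, random_state=seed + s).ravel() for m in models]
        )
        front = non_dominated(Y_sim)        # non-dominated filter from the sketch under the abstract
        ideals.append(front.min(axis=0))    # ideal: componentwise best over the simulated front
        nadirs.append(front.max(axis=0))    # nadir: componentwise worst over the simulated front
    return np.median(ideals, axis=0), np.median(nadirs, axis=0)

# Usage:
# X_grid = np.linspace(0.0, 1.0, 200)[:, None]
# ideal_hat, nadir_hat = estimate_ideal_nadir(models, X_grid)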
“…Maximizing the hypervolume (HV) has been shown to produce Pareto fronts with excellent coverage [69,11,65]. However, there has been little work on EHVI in the parallel setting, and the work that has been done resorts to approximate methods [67,27,58]. Furthermore, a vast body of literature has focused on efficient EHVI computation [33,19,63], but the time complexity for computing EHVI is exponential in the number of objectives, in part due to the hypervolume indicator itself incurring a time complexity that scales super-polynomially with the number of objectives [64].…”
Section: Limitations Of Current Approaches (mentioning)
Confidence: 99%
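The excerpt points out that exact hypervolume (and hence exact EHVI) computation scales super-polynomially with the number of objectives, which is why approximate methods are used in practice. A small illustration of the simplest such approximation follows: a generic Monte Carlo estimate of the dominated hypervolume by sampling inside the reference box. This is a textbook estimator sketched for illustration, not a method from the cited works.

# Illustrative sketch: a generic Monte Carlo hypervolume estimate, usable for any number
# of objectives when exact computation is too expensive (minimization).
import numpy as np

def mc_hypervolume(front, ref, n_samples=100_000, seed=0):
    """Estimate the hypervolume dominated by `front` w.r.t. `ref` by uniform sampling
    in the box [componentwise min of front, ref] and counting dominated samples."""
    rng = np.random.default_rng(seed)
    low = front.min(axis=0)
    box_volume = np.prod(ref - low)
    U = rng.uniform(low, ref, size=(n_samples, len(ref)))
    # A sample is dominated if some front point is <= it in every objective.
    dominated = np.any(np.all(front[None, :, :] <= U[:, None, :], axis=2), axis=1)
    return box_volume * dominated.mean()

# Usage on a small 3-objective front (values assumed for the example):
# front = np.array([[0.1, 0.6, 0.4], [0.3, 0.2, 0.5], [0.5, 0.4, 0.1]])
# print(mc_hypervolume(front, ref=np.array([1.0, 1.0, 1.0])))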
“…Sequential greedy optimization often yields better empirical results because the optimization problem has a lower dimension: d in each step, rather than q·d in the joint problem. Most prior works in the MO setting use a sequential greedy approximation or heuristics [58,67,27,9], but impute the unobserved outcomes with the posterior mean rather than integrating over the posterior [29]. For many joint acquisition functions involving expectations, this shortcut sacrifices the theoretical error bound on the sequential greedy approximation, because the exact joint acquisition function over x_1, …, x_i, 1 ≤ i ≤ q requires integration over the joint posterior P(f(x_1), …, f(x_q) | D) and is not computed for i > 1.…”
Section: Related Work (mentioning)
Confidence: 99%
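The excerpt contrasts joint q-point acquisition optimization with the sequential greedy heuristic that imputes pending outcomes with the posterior mean (often called the Kriging Believer strategy). The sketch below shows that imputation loop for a batch of q points, reusing the GP models, the non_dominated helper, and the mc_ehvi scorer from the sketch under the abstract; the refit-after-imputation wiring is an assumption made for illustration rather than any specific cited implementation.

# Illustrative sketch: sequential greedy batch selection with posterior-mean
# ("Kriging Believer") imputation of pending outcomes.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def greedy_batch(X, Y, X_cand, ref, q=4):
    """Pick q points one at a time; after each pick, pretend its outcome equals the
    posterior mean and refit before choosing the next point."""
    X_aug, Y_aug, batch = X.copy(), Y.copy(), []
    for _ in range(q):
        models = [GaussianProcessRegressor(normalize_y=True).fit(X_aug, Y_aug[:, j])
                  for j in range(Y_aug.shape[1])]
        front = non_dominated(Y_aug)                    # helper from the sketch under the abstract
        scores = mc_ehvi(models, X_cand, front, ref)    # acquisition scorer from the same sketch
        x_new = X_cand[np.argmax(scores)]
        y_fake = np.array([m.predict(x_new[None, :])[0] for m in models])   # posterior-mean imputation
        X_aug, Y_aug = np.vstack([X_aug, x_new]), np.vstack([Y_aug, y_fake])
        batch.append(x_new)
    return np.array(batch)   # these q points would then be evaluated in parallel on the true objectives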