2015
DOI: 10.1016/j.asoc.2015.01.005

Efficient multi-criteria optimization on noisy machine learning problems

Cited by 37 publications (21 citation statements)
References 52 publications
“…On the other hand, the inherent complexity of standard implementations of Bayesian methods limits the number of objective-function values that can be processed. A modest number of iterations seems sufficient for the solution of various applied problems, e.g., up to 60 evaluations of the objective functions in [24], 100 function evaluations in [27], and 200 function evaluations in [20]. However, such a number of function values is not sufficient for an appropriate representation of the Pareto front of the considered three-objective optimization problem.…”
Section: 2 Optimization Algorithm
confidence: 99%
“…Given (x_i, f(x_i)), i = 1, …, t, the incumbent Pareto-front approximation becomes P_max,t = the non-dominated subset of {f(x_i) | i ∈ {1, …, t}}, and its hypervolume replaces f_max,t in the definition of the EI. The EHVI was first proposed in [21], and since then it has been used in Evolutionary Algorithms for airfoil optimization [4] and quantum control [22], as well as in multi-criterion generalizations of Efficient Global Optimization for applications in fluid dynamics [23], event controllers in wastewater treatment [1], efficient algorithm tuning [5], electrical component design [8], and bio-fuel power generation [3]. In all of these applications, the bi-objective EHVI was used.…”
Section: Relevance and Related Work
confidence: 99%
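The incumbent Pareto-front approximation described in the quote above is straightforward to compute. The sketch below is a hypothetical illustration (not code from the cited papers): it filters observed objective vectors down to their non-dominated subset and computes the 2D hypervolume of the resulting front relative to a reference point, assuming both objectives are minimized.

```python
def non_dominated(points):
    """Return the subset of `points` not dominated by another point
    (minimization in every objective)."""
    nd = []
    for i, p in enumerate(points):
        dominated = any(
            j != i and q != p and all(qk <= pk for qk, pk in zip(q, p))
            for j, q in enumerate(points)
        )
        if not dominated:
            nd.append(p)
    return nd

def hypervolume_2d(front, ref):
    """Area dominated by a 2D front up to reference point `ref`
    (both objectives minimized; points must lie inside the box)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):  # ascending f1, so f2 descends along the front
        if f2 < prev_f2:          # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For higher numbers of objectives, exact hypervolume computation becomes considerably more expensive, which is one reason the cited applications focus on the bi-objective case.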
“…In the context of SAMCO, the Expected Hypervolume Improvement (EHVI) is frequently used as an infill or pre-selection criterion [1][2][3][4][5][6][7][8], and is a straightforward generalization of the single-objective expected improvement (EI). It is called multiple times because, in each iteration of SAMCO, either an optimum of the EHVI needs to be found, as in the case of Bayesian global optimization [10], or, in surrogate-assisted evolutionary algorithms, it is used to pre-assess the quality of the individuals of a larger population.…”
Section: Introduction
confidence: 99%
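As a rough illustration of how the EHVI acts as an infill criterion, the following sketch estimates it by Monte Carlo for a bi-objective minimization problem, assuming the surrogate predicts each objective of a candidate as an independent Gaussian. This is only an illustrative approximation; the cited works use exact integration formulas rather than sampling.

```python
import random

def hypervolume_2d(points, ref):
    """Area jointly dominated by `points` up to `ref` (minimization);
    dominated points are skipped by the f2 < prev_f2 guard."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def ehvi_mc(front, ref, mu, sigma, n_samples=10000, rng=random):
    """Monte Carlo estimate of the expected hypervolume improvement of a
    candidate whose two objectives are predicted as independent Gaussians
    N(mu[k], sigma[k]^2); both objectives are minimized."""
    base = hypervolume_2d(front, ref)
    total = 0.0
    for _ in range(n_samples):
        y = tuple(rng.gauss(m, s) for m, s in zip(mu, sigma))
        # samples outside the reference box cannot improve the hypervolume
        if y[0] >= ref[0] or y[1] >= ref[1]:
            continue
        total += hypervolume_2d(list(front) + [y], ref) - base
    return total / n_samples
```

In a surrogate-assisted loop, `ehvi_mc` would be maximized over candidate inputs (or used to rank a population), and the chosen candidate then evaluated on the true objectives; in practice the closed-form bi-objective EHVI is preferred because it is cheaper and noise-free.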
“…Compared to evolutionary multi-objective optimization algorithms (EMOAs), MOBGO requires only a small budget of function evaluations to achieve a similar result with respect to the hypervolume indicator, and it has already been used in real-world applications to solve expensive evaluation problems [40]. To the authors' knowledge, BGO was used for the first time in the context of airfoil optimization in [27], and was then applied in the fields of biogas plant controllers [16], detection in water quality management [41], machine learning algorithm configuration [23], and structural design optimization [33].…”
Section: Introduction
confidence: 99%