Expected Improvement Matrix-Based Infill Criteria for Expensive Multiobjective Optimization
2017. DOI: 10.1109/tevc.2017.2697503

Cited by 153 publications (41 citation statements); references 41 publications. Citing publications span 2017 to 2023.

Citation statements (ordered by relevance):
“…One such acquisition function is the expected improvement matrix (EIM) [28]. EIM is a multi-objective adaptation of EI, with the expected improvement for each candidate and each objective calculated in a matrix.…”
Section: Acquisition Function (mentioning, confidence: 99%)
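The excerpt above gives the structure of the criterion: one single-objective expected improvement value per non-dominated front point and per objective, collected in a matrix and then reduced to a scalar infill value. A minimal sketch of that idea follows, assuming independent per-objective Gaussian process predictions; the function names, the small floor on the predictive standard deviation, and the Euclidean and maximin reductions shown here are illustrative rather than copied verbatim from the paper.

```python
import numpy as np
from scipy.stats import norm

def eim_matrix(mu, sigma, front):
    """Expected-improvement matrix for one candidate point.

    mu, sigma : (m,) predictive means / standard deviations, one per objective.
    front     : (k, m) objective vectors of the current non-dominated set.
    Returns a (k, m) matrix whose (i, j) entry is the single-objective EI of the
    candidate against front point i in objective j (minimization assumed).
    """
    mu, sigma, front = map(np.asarray, (mu, sigma, front))
    diff = front - mu                       # (k, m) predicted improvement
    s = np.maximum(sigma, 1e-12)            # guard against zero predictive variance
    u = diff / s
    return diff * norm.cdf(u) + s * norm.pdf(u)

def eim_euclidean(mu, sigma, front):
    """Euclidean-style reduction of the EI matrix to a scalar infill value."""
    m = eim_matrix(mu, sigma, front)
    return np.min(np.sqrt(np.sum(m ** 2, axis=1)))

def eim_maximin(mu, sigma, front):
    """Maximin-style reduction of the EI matrix to a scalar infill value."""
    m = eim_matrix(mu, sigma, front)
    return np.min(np.max(m, axis=1))
```

In a Kriging-assisted loop, a candidate maximizing one of these scalar values would be evaluated on the expensive objectives and added to the training set before the surrogate models are refit.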
“…EIM has been previously implemented in the form of the EIM-EGO algorithm. The implementation was tested on a series of continuous test problems, displaying competitive results to the state-of-the-art multi-objective algorithms, whilst providing efficient scaling when increasing the number of optimised objectives [28].…”
Section: Acquisition Function (mentioning, confidence: 99%)
“…where f* is the true value of the objective or fitness value of the solution, f is the approximated value, and ξ is the error function which reflects the degree of "uncertainty" of the approximation of the model [56].…”
Section: Single-Objective SAEAs (mentioning, confidence: 99%)
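The equation that this excerpt qualifies is cut off by the snippet's truncation. A common way such a surrogate-error relation is written in the SAEA literature, rendered here with the excerpt's own symbols purely as an assumption about the missing line, is:

```latex
% Assumed reconstruction of the truncated relation (not shown in the excerpt):
% the approximated value equals the true objective value plus an error term.
f(\mathbf{x}) = f^{*}(\mathbf{x}) + \xi(\mathbf{x})
```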
“…Despite those mentioned difficulties in real-world applications, many benchmark test suites, which try to mimic the properties of real-world problems, have been used to examine the performance of data-driven EMO algorithms. For instance, the KNO and OKA problems were used in [11]; the Zitzler-Deb-Thiele test suite (ZDT) [12] was used in [13][14][15][16]; the Deb-Thiele-Laumanns-Zitzler test suite (DTLZ) [17] was used in [18,19]; and the MF test suite was used in [20]. It is highlighted that these benchmark test suites promote the development of data-driven evolutionary multiobjective optimization, but the abilities of these data-driven EMO algorithms in solving real-world expensive MOPs are not validated.…”
Section: Introduction (mentioning, confidence: 99%)
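As a concrete instance of the benchmark problems the excerpt lists, the sketch below implements ZDT1, the first two-objective problem of the ZDT suite; the 30-variable setting in the usage lines is the conventional default and is chosen here only for illustration.

```python
import numpy as np

def zdt1(x):
    """ZDT1 benchmark: n decision variables in [0, 1], two objectives to minimize.

    f1 = x1,  g = 1 + 9 * mean(x2..xn),  f2 = g * (1 - sqrt(f1 / g)).
    The Pareto-optimal front is f2 = 1 - sqrt(f1), reached when x2..xn = 0 (g = 1).
    """
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# Usage: a random (dominated) point and a Pareto-optimal point.
rng = np.random.default_rng(0)
print(zdt1(rng.random(30)))                 # g > 1, so the point lies above the front
print(zdt1(np.r_[0.25, np.zeros(29)]))      # on the front: [0.25, 0.5]
```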