2017
DOI: 10.1109/tevc.2017.2693320

Expected Improvement of Penalty-Based Boundary Intersection for Expensive Multiobjective Optimization

Abstract: Computationally expensive multiobjective optimization problems are difficult to solve using evolutionary algorithms (EAs) alone and require surrogate models, such as the Kriging model. To solve such problems efficiently, we propose infill criteria for appropriately selecting multiple additional sample points with which to update the Kriging model. These criteria correspond to the expected improvement of the penalty-based boundary intersection (PBI) and the inverted PBI. These PBI-based measures are increasingly applied…
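The PBI scalarization named in the abstract can be sketched as follows. This is a minimal illustration of the standard PBI function from decomposition-based multiobjective optimization, not code from the paper; the function and argument names are ours.

```python
import numpy as np

def pbi(f, weight, ideal, theta=5.0):
    """Penalty-based boundary intersection (PBI) scalarization.

    d1: distance traveled from the ideal point along the weight direction.
    d2: perpendicular distance to that direction, penalized by theta.
    Smaller values are better under minimization.
    """
    w = weight / np.linalg.norm(weight)   # unit weight vector
    diff = f - ideal                      # objective vector relative to ideal
    d1 = np.dot(diff, w)                  # projection onto the weight direction
    d2 = np.linalg.norm(diff - d1 * w)    # deviation from the weight direction
    return d1 + theta * d2
```

The expected improvement the paper proposes is computed on this scalar quantity rather than on a single objective; theta controls how strongly solutions are pulled toward the weight direction.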

Cited by 55 publications
(24 citation statements)
References 43 publications
“…A number of selection criteria can be adopted to strike a balance between these two types of samples in individual-based strategies, also known as infill sampling criterion or acquisition function in Bayesian optimization [51]. Existing infill criteria include the expected improvement (ExI) [52], [53], probability of improvement (PoI) [54], and lower confidence bound (LCB) [55]. These infill criteria typically aggregate the predicted fitness value and the estimated uncertainty of the predicted fitness into a single-objective criterion.…”
Section: A. Data Collection (citation type: mentioning; confidence: 99%)
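For reference, the three infill criteria this statement lists can be sketched for a single candidate with Kriging-predicted mean `mu` and standard deviation `sigma`. This is a minimal stdlib-only illustration assuming minimization; the function names are ours, not from the cited works.

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """ExI: E[max(f_best - Y, 0)] for Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * _Phi(z) + sigma * _phi(z)

def probability_of_improvement(mu, sigma, f_best):
    """PoI: P(Y < f_best)."""
    if sigma <= 0.0:
        return 1.0 if mu < f_best else 0.0
    return _Phi((f_best - mu) / sigma)

def lower_confidence_bound(mu, sigma, kappa=2.0):
    """LCB: smaller is more promising; kappa trades off exploration."""
    return mu - kappa * sigma
```

All three aggregate the predicted fitness and its uncertainty into one scalar, which is exactly the single-objective aggregation the quoted passage describes.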
“…Here, the most promising candidate solutions predicted by the Kriging model are further evaluated using the low-order polynomial model, and the synthetic data generated by the polynomial model are used to update the Kriging model for the next generation. In optimization, expected improvement [53] is adopted to identify the most promising candidate solutions, and k-means clustering is applied in the decision space to choose sampling points, while fuzzy c-means clustering [69] is introduced to limit the number of data for training the Kriging model.…”
Section: Off-line Small Data-Driven Optimization of Fused Magnesium (citation type: mentioning; confidence: 99%)
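A rough sketch of the batch-selection pattern this statement describes: rank candidates by expected improvement, then cluster them in the decision space with k-means so the chosen sample points stay diverse. The function name, shortlist size, and iteration count below are illustrative assumptions, not details from the cited work.

```python
import numpy as np

def select_infill_points(candidates, ei_values, k, rng_seed=0):
    """Pick up to k diverse infill points: shortlist the top-EI candidates,
    run a small k-means over the shortlist in the decision space, then take
    the best-EI member of each cluster. `candidates` is an (n, dim) array."""
    rng = np.random.default_rng(rng_seed)
    top = np.argsort(ei_values)[::-1][: max(5 * k, k)]  # shortlist by EI
    pts, ei = candidates[top], ei_values[top]

    # plain Lloyd's k-means on the shortlist
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(20):
        labels = np.argmin(
            np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2), axis=1
        )
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)

    # best-EI member of each non-empty cluster, mapped back to original indices
    chosen = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if members.size:
            chosen.append(members[np.argmax(ei[members])])
    return top[np.array(chosen)]
```

Clustering in the decision space (rather than picking the k highest-EI points outright) avoids spending the whole evaluation budget on near-duplicate candidates around a single optimum of the acquisition surface.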
“…Also, some research measures the uncertainty based on the variance of surrogate outputs [39], [50]. In addition, a branch of strategies called infill criteria has been studied that considers the predicted fitness and the uncertainty together to combine their advantages, including the lower confidence bound (LCB) [40], probability of improvement (PoI) [51], and expected improvement (ExI) [52]. Furthermore, building on this, multiobjective infill criteria have also shown effectiveness when minimizing fitness and uncertainty together [21].…”
Section: B. Related Work (citation type: mentioning; confidence: 99%)
“…Apart from the Kriging models, some studies employ the variance of surrogate outputs to measure the uncertainty [18], [58]. In addition, as evaluating promising individuals and evaluating uncertain individuals offer different advantages, many strategies called infill criteria have been proposed and studied that combine the two, such as the lower confidence bound [19], probability of improvement [59], and expected improvement [52], [60]. Moreover, Tian et al. [13] proposed a multiobjective infill criterion driven GP-assisted social learning PSO (MGP-SLPSO), where the multiobjective infill criteria are shown to be efficient at optimizing fitness and minimizing uncertainty together when solving high-dimensional problems.…”
Section: B. Related Work (citation type: mentioning; confidence: 99%)