Comparison of conceptually different multi-objective Bayesian optimization methods for material design problems
2022
DOI: 10.1016/j.mtcomm.2022.103440

Cited by 23 publications (23 citation statements)
References 31 publications
“…• How do these findings compare to other optimization algorithms (e.g. genetic algorithms, random forest based BO [42])?…”
Section: Future Work
confidence: 99%
“…• How do these findings compare to other optimization algorithms (e.g. random search, genetic algorithms, random forest based BO [57])?…”
Section: Future Work
confidence: 99%
“…Aside from being a performance metric to compare optimisation strategies, HV can also be directly evaluated to guide convergence of various algorithms. Hanaoka et al showed that scalarization-based MOBOs may be best suited for clear exploitation and/or preferential optimisation trajectory of objectives, whereas HV-based MOBOs are better for exploration of the entire search space [75]. Indeed, HV-based approaches empirically show a preference in proposed solutions towards the extrema of a PF [76], [77], and thus can better showcase extrapolation.…”
Section: Hypervolume
confidence: 99%
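The statement above treats hypervolume (HV) as both a comparison metric and a convergence guide. As a minimal illustration of what is being measured (not code from the cited papers), the following sketch computes the exact 2-D hypervolume of a non-dominated front for a maximization problem; the function name and the reference point are assumptions for this example:

```python
def hypervolume_2d(front, ref=(0.0, 0.0)):
    """Exact 2-D hypervolume for a maximization problem.

    `front` is a list of mutually non-dominated (f1, f2) points,
    all of which dominate the reference point `ref`.
    """
    # Sweep from the best f1 value downwards; each point contributes a
    # horizontal strip between its f2 and the previous strip's top.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Three mutually non-dominated points:
print(hypervolume_2d([(4, 1), (3, 2), (1, 3)]))  # → 8.0
```

Because solutions near the extrema of the Pareto front enclose the largest rectangles against the reference point, HV-based acquisition tends to favour them, which is consistent with the empirical preference noted in the quote.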
“…An unavoidable issue of empirically benchmarking optimisation strategies on real-world problems is that some surrogate model must be used in-lieu of a black-box where new data is experimentally validated. Alternatively, a candidate selection problem can be used where optimisation is limited to only proposing new candidates from a pre-labelled dataset until eventually the 'pool' of samples is exhausted [65], [75], [99], [100]. The benefit of this method over surrogate-based methods is that only real data from the black-box is used, rather than data extrapolated from a model approximating its behaviour.…”
Section: B. Real-world Benchmarks
confidence: 99%
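The candidate-selection setup described above can be sketched as a loop that may only "measure" points from a pre-labelled pool, so every observation is real data rather than a surrogate's extrapolation. This is a hedged illustration of the benchmarking protocol, not the cited papers' implementation; the function names and the acquisition-function signature are assumptions:

```python
import random

def pool_benchmark(pool, acquisition, n_iter):
    """Pool-based benchmark: the optimiser may only propose candidates
    from a pre-labelled dataset `pool` (candidate -> objective value),
    until the pool is exhausted.  `acquisition(candidate, observed)`
    scores an unmeasured candidate given the data observed so far.
    """
    observed = {}
    unmeasured = set(pool)
    # Seed with one random candidate, then pick greedily by acquisition.
    first = random.choice(sorted(unmeasured))
    observed[first] = pool[first]
    unmeasured.remove(first)
    for _ in range(n_iter):
        if not unmeasured:
            break  # pool exhausted: the benchmark ends here
        best = max(unmeasured, key=lambda c: acquisition(c, observed))
        observed[best] = pool[best]  # reveal the real, pre-measured label
        unmeasured.remove(best)
    return observed

# Toy usage: candidates 0..9, greedy acquisition on the candidate id.
random.seed(0)
result = pool_benchmark({x: x for x in range(10)}, lambda c, obs: c, 3)
```

The design choice highlighted in the quote is visible here: the black box is never modelled, only looked up, so benchmark results cannot be distorted by a surrogate's approximation error.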