2021
DOI: 10.1016/j.jcp.2020.109901

Bayesian optimization with output-weighted optimal sampling


Cited by 36 publications (56 citation statements)
References 20 publications
“…We have found that the likelihood ratio accelerates convergence of the sequential algorithm in a number of examples related to uncertainty quantification and rare-event prediction. The question of whether gains of similar proportions might be achieved in Bayesian optimization (where the focus is on learning the minimum of the objective function rather than the objective function itself or its statistics) is considered in [5].…”
Section: Discussion (citation type: mentioning; confidence: 99%)
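The likelihood-ratio weighting referenced in the excerpt above can be sketched as follows: candidate inputs x are scored by w(x) = p_x(x) / p_y(μ(x)), where p_x is the input density and p_y is an estimate of the density of the surrogate's predicted outputs, so that probable inputs mapping to rare (extreme) outputs are up-weighted. This is only a minimal illustrative sketch, not the authors' implementation: the toy surrogate mean `mu`, the standard-normal input density, and the kernel density estimate of the outputs are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setting: standard-normal input density, with mu(x) standing in
# for the posterior mean of a trained surrogate (e.g. a Gaussian process).
def p_x(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def mu(x):  # hypothetical surrogate mean, chosen for illustration only
    return np.sin(3 * x) + 0.1 * x**2

# Estimate the output density p_y with a Gaussian kernel density estimate
# over surrogate predictions at input samples.
xs = rng.standard_normal(2000)
ys = mu(xs)
h = 1.06 * ys.std() * len(ys) ** (-0.2)  # Silverman's bandwidth rule

def p_y(y):
    y = np.atleast_1d(y)
    kernels = np.exp(-0.5 * ((y[:, None] - ys[None, :]) / h) ** 2)
    return kernels.mean(axis=1) / (h * np.sqrt(2 * np.pi))

# Likelihood ratio w(x) = p_x(x) / p_y(mu(x)): large where the input is
# likely but the predicted output is rare, i.e. extreme-value regions.
cand = np.linspace(-3, 3, 601)
w = p_x(cand) / p_y(mu(cand))
x_next = cand[np.argmax(w)]  # candidate favored by output-weighted sampling
```

In a sequential algorithm, the ratio would typically multiply an uncertainty-based acquisition function rather than be maximized on its own; the sketch isolates the weight itself.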
“…Since the focus of this work is on experimental design, this idea will not be pursued any further. But this issue is worth mentioning because it suggests that any successful BED strategy introduced in this paper has the potential of being equally successful in the context of BO [5].…”
(citation type: mentioning; confidence: 98%)
“…We see that all bi-fidelity methods (BF-O and BF-Fn) achieve acceleration on the error reduction (to different extents) compared to the SF method. For the BF-Fn method, faster convergence is observed for larger n in the test range of n ∈ [1,15], but with much less benefit for n increasing from 10 to 15. The BF-O method provides the best result, in terms of the error e at cost c = 80, although the BF-O result is somewhat less accurate than the BF-F15 result for smaller c in the range of [25,50].…”
Section: Two-dimensional Stochastic Oscillator (citation type: mentioning; confidence: 98%)
“…EGRA [8] and many later improved variants [9,10,11]. Recently, new acquisition functions [12,13,14,15] have also been developed which focus on obtaining the overall probability density function (PDF) of the response with an emphasis on the extreme-value portion (which is our purpose in this paper instead of the exceeding probability for a single threshold).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Random forests [60] and support vector machines [61] are two other leading architectures for supervised learning. Bayesian methods are also widely used, especially for dynamical systems [62]. Genetic programming has also been widely used to learn human-interpretable, yet flexible representations of data for modeling [16,63,64,65] and control [4].…”
Section: The Architecture (citation type: mentioning; confidence: 99%)