AIAA Scitech 2020 Forum (2020)
DOI: 10.2514/6.2020-0678
Remarks for Scaling Up a General Gaussian Process to Model Large Dataset with Sub-models

Cited by 11 publications (9 citation statements) · References 21 publications
“…as any conditional of a Gaussian distribution is also Gaussian. Denote µ(x), σ²(x), and θ as the posterior mean, the posterior variance, and the hyper-parameters of the objective GP model, respectively.…”
Section: Classical Gaussian Process and Bayesian Optimization (mentioning)
confidence: 99%
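For readers unfamiliar with the quantities named in the quote, the following is a minimal sketch of how the posterior mean µ(x) and posterior variance σ²(x) follow from the standard conditional-Gaussian formulas. It assumes an RBF kernel and noisy observations; all names are illustrative and not taken from the cited paper.

```python
# Minimal GP posterior sketch: mu(x*) and sigma^2(x*) from the standard
# conditional-Gaussian formulas, assuming an RBF kernel (illustrative).
import numpy as np

def rbf(A, B, length_scale=1.0, variance=1.0):
    """RBF kernel matrix between row-stacked inputs A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, X_star, noise=1e-6, **kern):
    """Posterior mean mu(x*) and variance sigma^2(x*) at test points X_star."""
    K = rbf(X, X, **kern) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    K_s = rbf(X, X_star, **kern)
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf(X_star, X_star, **kern)) - (v * v).sum(axis=0)
    return mu, var
```

The hyper-parameters θ (here, length_scale, variance, and noise) would normally be fit by maximizing the marginal likelihood rather than fixed as above.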
“…Bostanabad et al. [1] proposed a globally approximate local GP (GAGP) and demonstrated it on up to ∼90k data points. Zhang et al. [2] proposed a locally weighted scheme, similar to van Stein et al. [3,4], in which the weights are derived by minimizing the weighted posterior variance, and demonstrated it on up to 100k data points. Tran et al. [5,6] proposed a local GP approach with the Wasserstein distance to model potential energy surfaces and mixed-integer BO problems [7].…”
Section: Introduction (mentioning)
confidence: 99%
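As an illustration of the weighting idea in the quote, the sketch below combines local sub-model predictions with inverse-variance weights, which minimize the variance of a convex combination of independent predictors. This is a generic construction under an independence assumption, not necessarily the exact scheme of Zhang et al. [2] or van Stein et al. [3,4].

```python
# Combine per-sub-model GP predictions by inverse-variance (precision)
# weighting; assumes sub-model predictions are independent (illustrative).
import numpy as np

def combine_submodels(means, variances):
    """means, variances: (n_models, n_points) arrays of posterior means
    and variances from each local sub-model at common test points."""
    prec = 1.0 / np.asarray(variances)          # per-model precisions
    w = prec / prec.sum(axis=0, keepdims=True)  # weights minimizing variance
    mu = (w * np.asarray(means)).sum(axis=0)    # combined posterior mean
    var = 1.0 / prec.sum(axis=0)                # combined posterior variance
    return mu, var
```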
“…Once the surrogate is constructed, Sobol decomposition provides the sensitivity indices [16]. We use the Bayesian Hybrid Modeling (GEBHM) approach [50,51], a probabilistic machine learning method that enables sensitivity analysis, calibration, multi-fidelity modeling, and uncertainty quantification.…”
Section: Sensitivity Analysis of the Model Parameters (mentioning)
confidence: 99%
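To make the Sobol step concrete, here is a hedged sketch of the first-order pick-freeze (Saltelli) estimator evaluated on a surrogate. `surrogate` is any callable mapping an (n, d) input array to n outputs, e.g. the GP posterior mean above; this is the generic estimator, not GEBHM's implementation.

```python
# First-order Sobol indices via the pick-freeze (Saltelli) estimator on a
# cheap surrogate; inputs are assumed scaled to the unit hypercube.
import numpy as np

def sobol_first_order(surrogate, d, n=10_000, seed=None):
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))                   # two independent sample sets
    B = rng.random((n, d))
    fA, fB = surrogate(A), surrogate(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # vary only input i against A
        S[i] = np.mean(fB * (surrogate(ABi) - fA)) / var
    return S                                 # first-order indices S_i
```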
“…22, the size of the data may grow exponentially, demanding a large computational budget. In this case, a scalable framework for GP regression of large datasets can be exploited to efficiently address the computational cost [20].…”
Section: Surrogate Modeling (mentioning)
confidence: 99%
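One simple realization of the sub-model idea referenced in the quote is to partition the training data, fit an exact GP per partition, and route each test point to its nearest sub-model. The sketch below (reusing `gp_posterior` from the first example and k-means for partitioning) is an illustrative assumption, not the specific framework of [20].

```python
# Scale GP regression by partitioning the data into k sub-models and
# predicting each test point with its nearest cluster's GP (illustrative).
import numpy as np
from sklearn.cluster import KMeans

def fit_predict_submodels(X, y, X_star, k=10):
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    assign = km.predict(X_star)              # nearest centroid per test point
    mu = np.empty(len(X_star))
    var = np.empty(len(X_star))
    for c in range(k):
        test = np.where(assign == c)[0]
        if test.size == 0:
            continue
        train = km.labels_ == c              # exact GP on this partition only
        mu[test], var[test] = gp_posterior(X[train], y[train], X_star[test])
    return mu, var
```

Each exact sub-model GP costs O(n_c³) in its partition size n_c, so k partitions of roughly n/k points reduce the O(n³) cost of a single global GP by about a factor of k².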