2020
DOI: 10.1080/00949655.2020.1850729
Fast matrix algebra for Bayesian model calibration

Cited by 2 publications (3 citation statements)
References 14 publications
“…On the other hand, Kejzlar et al. (2021) used an empirical Bayes approach, wherein instead of placing a prior distribution on the unknown parameters, including θ, σ_ε^2, γ, ω, μ_δ, and μ_f, the method estimates these parameters directly from the data. To enhance the efficiency of the MCMC methods, Rumsey and Huerta (2021) employed the eigenvalue decomposition to approximate the inverse of the covariance matrix in the likelihood (6), that is,…”
Section: Expensive Computer Simulation (mentioning; confidence: 99%)
“…On the other hand, Kejzlar et al. (2021) used an empirical Bayes approach, wherein instead of placing a prior distribution on the unknown parameters, including θ, σ_ε^2, γ, ω, μ_δ, and μ_f, the method estimates these parameters directly from the data. To enhance the efficiency of the MCMC methods, Rumsey and Huerta (2021) employed the eigenvalue decomposition to approximate the inverse of the covariance matrix in the likelihood (6), that is, (τ_δ Φ_δ(γ) + σ^2 I_n)^{-1}, which can be computed in nearly quadratic time. Furthermore, Kejzlar and Maiti (2023) used variational Bayes inference (Blei et al., 2017), an alternative to MCMC that has been widely used to approximate the posterior distribution through optimization, as it tends to be faster and easier to scale to massive datasets.…”
Section: Posterior Inference (mentioning; confidence: 99%)
“…Since K varies with the unknown parameters κ, many cubic-time inversions may be needed. By fixing κ_2 at an empirically reasonable value, substantial speedup can be obtained in applications such as Bayesian model calibration (Rumsey & Huerta, 2021), but it remains a computational bottleneck and limits deployment of the full-scale GP to problems with only a moderate number of model runs.…”
Section: Introduction (mentioning; confidence: 99%)
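The citation statements above describe inverting a covariance matrix of the form τ_δ Φ_δ(γ) + σ^2 I_n via an eigendecomposition of the fixed correlation matrix. A minimal sketch of that idea follows (an illustration of the general technique, not the authors' code; all function names are assumed): once Φ = Q Λ Q^T is computed, the inverse for any new (τ_δ, σ^2) is Q diag(1/(τ_δ λ_i + σ^2)) Q^T, so only the initial factorization costs cubic time.

```python
import numpy as np

def eig_factor(Phi):
    # One-time O(n^3) symmetric eigendecomposition of the fixed
    # correlation matrix Phi (valid while gamma is held fixed).
    lam, Q = np.linalg.eigh(Phi)
    return lam, Q

def fast_inverse(lam, Q, tau_delta, sigma2):
    # (tau_delta*Phi + sigma2*I)^{-1} = Q diag(1/(tau_delta*lam + sigma2)) Q^T
    d = 1.0 / (tau_delta * lam + sigma2)   # O(n) per new (tau_delta, sigma2)
    return (Q * d) @ Q.T                   # O(n^2) reconstruction

# Usage sketch: inside an MCMC loop, reuse (lam, Q) across iterations
# and recompute only the cheap diagonal rescaling.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Phi = A @ A.T + 6 * np.eye(6)              # symmetric positive-definite stand-in
lam, Q = eig_factor(Phi)
inv = fast_inverse(lam, Q, tau_delta=2.0, sigma2=0.5)
assert np.allclose(inv @ (2.0 * Phi + 0.5 * np.eye(6)), np.eye(6))
```

This is why fixing the correlation parameters (γ, or κ_2 in the last statement) matters: if they change at every iteration, the eigendecomposition must be redone and the cubic cost returns.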