2009
DOI: 10.1198/jasa.2009.0005
Bayesian Emulation and Calibration of a Stochastic Computer Model of Mitochondrial DNA Deletions in Substantia Nigra Neurons

Cited by 72 publications (97 citation statements)
References 21 publications
“…Any subsequent calculation one wishes to do with GALFORM can instead be performed far more efficiently using an emulator (Heitmann et al 2009). For example, an emulator can be used within an MCMC algorithm to greatly speed up convergence (Kennedy & O'Hagan 2001; Higdon et al 2004; Henderson et al 2009). This is especially useful because, for scenarios with a moderate to high number of input parameters, MCMC algorithms often require vast numbers (billions, trillions, or more) of model evaluations to adequately explore the input space and reach convergence: see, for example, the excellent discussion in Geyer (2011).…”
Section: Bayesian Emulation Methodology (mentioning)
confidence: 99%
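The idea in the statement above, i.e. replacing expensive simulator calls inside an MCMC loop with cheap emulator predictions, can be illustrated with a minimal sketch. This is not the cited authors' implementation: the target log-likelihood, kernel choices, and all parameter values below are hypothetical toy assumptions. A Gaussian-process emulator is trained once on a small design of expensive evaluations, and its predictive mean then serves as a surrogate log-likelihood inside a Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_log_lik(theta):
    # Stand-in for a costly simulator-based log-likelihood (hypothetical):
    # a Gaussian log-density centred at 1.3 with variance 0.2.
    return -0.5 * (theta - 1.3) ** 2 / 0.2

# Design points: a handful of expensive simulator evaluations.
X = np.linspace(-2.0, 4.0, 12)
y = expensive_log_lik(X)

def kernel(a, b, ell=1.0, sf=5.0):
    # Squared-exponential covariance (illustrative hyperparameters).
    return sf * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = kernel(X, X) + 1e-6 * np.eye(len(X))  # small nugget for stability
alpha = np.linalg.solve(K, y)

def emulated_log_lik(theta):
    # GP predictive mean k(theta, X) K^{-1} y: cheap to evaluate.
    return kernel(np.atleast_1d(theta), X) @ alpha

# Metropolis random walk driven entirely by the emulator.
theta, samples = 0.0, []
cur = emulated_log_lik(theta)[0]
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)
    # Only trust the emulator inside the design region.
    if X.min() <= prop <= X.max():
        new = emulated_log_lik(prop)[0]
        if np.log(rng.uniform()) < new - cur:
            theta, cur = prop, new
    samples.append(theta)

post_mean = float(np.mean(samples[1000:]))  # close to the true mode, 1.3
```

Restricting proposals to the design region is one simple guard against the GP mean reverting to its prior far from the training runs; more careful treatments propagate the emulator's predictive variance into the acceptance step.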
“…Henderson et al. [2009], Jandarov et al. [2014], Wilkinson [2014], Meeds and Welling [2014] and Cameron et al. [2015] each implement MCMC algorithms where the true likelihood is replaced by one derived from an emulator. Important differences between the methods lie in how the emulator is trained.…”
Section: Emulation (mentioning)
confidence: 99%
“…The training set is defined as the pair D = {X, y}. A key property of GPs is that their posterior distribution, after conditioning on the training data D, is still a GP; in this case, given a prior as in expression (27) and a training set D, we obtain a posterior GP with updated mean and covariance functions. This allows us to make predictions.…”
Section: GP Emulation General Framework (mentioning)
confidence: 99%
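The conditioning step described above (prior GP plus training set D = {X, y} yields a posterior GP with updated mean and covariance) can be sketched in a few lines. The zero-mean prior, squared-exponential covariance, and toy data below are illustrative assumptions, not the quoted paper's expression (27).

```python
import numpy as np

def sq_exp(a, b, ell=0.7, sigma2=1.0):
    # Squared-exponential (RBF) prior covariance function.
    return sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# Training set D = {X, y} (toy data from a smooth function).
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(X)

K = sq_exp(X, X) + 1e-10 * np.eye(len(X))  # tiny jitter for stability
L = np.linalg.cholesky(K)

def posterior(x_star):
    """Posterior (predictive) mean and covariance at new inputs x_star."""
    Ks = sq_exp(x_star, X)                              # k(x*, X)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                                   # k(x*,X) K^{-1} y
    v = np.linalg.solve(L, Ks.T)
    cov = sq_exp(x_star, x_star) - v.T @ v              # k(x*,x*) - k(x*,X) K^{-1} k(X,x*)
    return mean, cov

m, C = posterior(np.array([0.75]))  # prediction between design points
```

At a training input the posterior mean reproduces the observed value and the posterior variance collapses to (numerically) zero, which is exactly the interpolation property that makes GPs attractive as emulators of deterministic code.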
“…For each function, we test the emulator predictions against the observed values by using the LOO-CV method. To select the final covariance function for the GP emulator, we use the criterion that the predicted data will lie within the 95% CI in 95% of cases [27], i.e., we compute the percentage of points out of range for each emulator and then choose the model (covariance function) with the smallest number of cases outside the 95% CI. The 95% CIs are computed as in [45], i.e., by using the intervals (m_j − 2s_j, m_j + 2s_j), where m_j and s_j are respectively the predictive mean (32) and the square root (standard deviation) of the variance (33) for a given design point ξ_j.…”
Section: Specification of the GP Emulation Model (mentioning)
confidence: 99%
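The leave-one-out coverage check described above can be sketched as follows: each design point ξ_j is held out in turn, the GP is refit on the remaining points, and we count how often the held-out value falls outside (m_j − 2s_j, m_j + 2s_j). The design, kernel, and toy output function are hypothetical stand-ins for an actual simulator.

```python
import numpy as np

def sq_exp(a, b, ell=0.8, sigma2=1.0):
    # Squared-exponential covariance (illustrative hyperparameters).
    return sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(Xtr, ytr, xs):
    # GP predictive mean and variance under a zero-mean prior.
    K = sq_exp(Xtr, Xtr) + 1e-10 * np.eye(len(Xtr))
    Ks = sq_exp(xs, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = sq_exp(xs, xs).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T)
    )
    return mean, np.maximum(var, 0.0)

# Design points xi_j and simulator outputs (toy smooth function).
xi = np.linspace(0.0, 3.0, 15)
f = np.cos(xi)

outside = 0
for j in range(len(xi)):
    mask = np.arange(len(xi)) != j          # leave point j out
    m, v = gp_predict(xi[mask], f[mask], xi[j:j + 1])
    s = np.sqrt(v[0])
    if not (m[0] - 2 * s < f[j] < m[0] + 2 * s):
        outside += 1

pct_outside = 100.0 * outside / len(xi)     # selection criterion statistic
```

In the selection procedure quoted above, this percentage would be computed for each candidate covariance function and the one with the fewest points outside the 95% interval would be retained.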