2018
DOI: 10.1016/j.cma.2018.01.053

Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

Abstract: In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the opti…
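To make the abstract's idea concrete, here is a minimal, illustrative Python sketch of a double-loop Monte Carlo estimator of the expected information gain whose inner evidence estimate uses an importance-sampling density centred at a Laplace (Gaussian) approximation of the posterior. The 1D Gaussian model, the sample sizes N and M, and the helper names (laplace_proposal, eig_dlmc_is) are assumptions made for illustration only; this is not the authors' derived optimal importance density or their sample-allocation analysis.

```python
# Illustrative sketch (not the paper's implementation): double-loop Monte Carlo
# estimate of the expected information gain with a Laplace-based importance
# sampling density in the inner (evidence) loop.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Toy model: prior theta ~ N(0, sig_p^2), data y | theta ~ N(theta, sig_e^2).
sig_p, sig_e = 1.0, 0.1

def log_prior(theta):
    return -0.5 * (theta / sig_p) ** 2 - 0.5 * np.log(2 * np.pi * sig_p**2)

def log_like(y, theta):
    return -0.5 * ((y - theta) / sig_e) ** 2 - 0.5 * np.log(2 * np.pi * sig_e**2)

def laplace_proposal(y):
    """Gaussian (Laplace) approximation of p(theta | y): mode and curvature."""
    var_post = 1.0 / (1.0 / sig_p**2 + 1.0 / sig_e**2)  # inverse Hessian at the mode
    mean_post = var_post * y / sig_e**2                  # MAP estimate
    return mean_post, np.sqrt(var_post)

def eig_dlmc_is(N=2000, M=200):
    """Double-loop MC estimate of the expected information gain,
    with importance sampling for the inner evidence estimate."""
    gains = np.empty(N)
    for n in range(N):
        theta_n = rng.normal(0.0, sig_p)          # outer sample from the prior
        y_n = rng.normal(theta_n, sig_e)          # synthetic observation
        mu_q, sd_q = laplace_proposal(y_n)        # importance density q ~ posterior
        theta_m = rng.normal(mu_q, sd_q, size=M)  # inner samples drawn from q
        log_q = -0.5 * ((theta_m - mu_q) / sd_q) ** 2 - 0.5 * np.log(2 * np.pi * sd_q**2)
        # log p(y_n) ~ log( (1/M) sum_m p(y_n|theta_m) p(theta_m) / q(theta_m) )
        log_w = log_like(y_n, theta_m) + log_prior(theta_m) - log_q
        log_evidence = logsumexp(log_w) - np.log(M)
        gains[n] = log_like(y_n, theta_n) - log_evidence
    return gains.mean()

print(f"EIG estimate: {eig_dlmc_is():.3f}")
```

Working in log space with logsumexp addresses the underflow mentioned in the abstract, and concentrating the inner samples near the posterior mode is what allows a small inner sample size M to suffice.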

Cited by 73 publications (127 citation statements)
References 21 publications
“…Much more recently, in [1], Beck et al. provided a thorough error analysis of the NMC estimator and derived the optimal allocation of N and M for a given ε. In fact, they considered the situation where g cannot be computed exactly and only its discretized approximation g_h with a mesh discretization parameter h is available.…”
Section: Nested Monte Carlo (mentioning)
confidence: 99%
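For reference, the nested (double-loop) Monte Carlo estimator that this snippet analyzes can be written, in standard notation assumed here rather than copied from [1], as

\[
\widehat{U}_{N,M}
  = \frac{1}{N}\sum_{n=1}^{N}\left[
      \ln p\big(Y^{(n)} \mid \theta^{(n)}\big)
      - \ln\Big(\frac{1}{M}\sum_{m=1}^{M} p\big(Y^{(n)} \mid \theta^{(n,m)}\big)\Big)
    \right],
\qquad
\theta^{(n)},\,\theta^{(n,m)} \sim p(\theta),\quad
Y^{(n)} \sim p(\,\cdot \mid \theta^{(n)}),
\]

with each likelihood evaluated through the discretized forward model g_h when g itself is unavailable. Roughly speaking, the cited error analysis balances the inner bias (driven by M and h) against the outer statistical error (driven by N) for a prescribed tolerance ε.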
“…This means that the expected information gain U_ξ measures the average reduction of the information entropy about θ obtained by collecting data Y_ξ. In (1), the inner expectation appearing on the right-most side is nothing but the Kullback-Leibler divergence between p(θ) and p(θ | Y_ξ). In the context of Bayesian experimental design, we claim that data Y_ξ with a larger value of U_ξ are more informative about θ, and thus the corresponding experimental design ξ is better.…”
Section: Introduction (mentioning)
confidence: 99%
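In the standard notation (assumed here), the quantity this snippet describes is

\[
U_\xi
  = \mathbb{E}_{Y_\xi}\!\left[ D_{\mathrm{KL}}\big( p(\theta \mid Y_\xi)\,\big\|\,p(\theta) \big) \right]
  = \mathbb{E}_{Y_\xi}\!\left[ \int p(\theta \mid Y_\xi)\,
      \ln\frac{p(\theta \mid Y_\xi)}{p(\theta)}\,\mathrm{d}\theta \right],
\]

i.e. the expected Kullback-Leibler divergence from the prior to the posterior; the equation number (1) mentioned in the snippet belongs to the citing paper and is not reproduced here.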
“…In Figure 6, we compare the computational time of MLDLSC and the average time of the MLDLMC runs for a range of error tolerances. We also include an estimate of the computational time for the DLMCIS method proposed in [6]. MLDLSC performs better than MLDLMC because the polynomial approximations of MLDLSC take advantage of the regularity of the expected information gain with respect to the random parameters.…”
Section: Numerical Results (mentioning)
confidence: 99%
“…We present two new computationally efficient methods of approximating the expected information gain based on the Kullback-Leibler divergence, in the context of Bayesian optimal experimental design. The first method we propose is a multilevel double loop Monte Carlo (MLDLMC), which improves upon the double loop Monte Carlo importance sampling (DLMCIS) method in [6].…”
Section: Results (mentioning)
confidence: 99%
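For context, multilevel double-loop estimators such as MLDLMC build on the generic multilevel Monte Carlo telescoping identity, written here as a hedged reminder rather than as the authors' specific estimator:

\[
\mathbb{E}[g_L] \;=\; \mathbb{E}[g_0] \;+\; \sum_{\ell=1}^{L} \mathbb{E}\big[g_\ell - g_{\ell-1}\big],
\]

where g_ℓ denotes the quantity of interest computed on discretization level ℓ. The cheap coarse levels absorb most of the statistical error, so comparatively few samples of the expensive fine-level differences are needed.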