2003
DOI: 10.1198/1061860032012

Estimating Expected Information Gains for Experimental Designs With Application to the Random Fatigue-Limit Model

Cited by 134 publications (149 citation statements)
References 14 publications
“…Ryan (2003) used mutual information to find static designs for efficient parameter estimation. Kim et al. () used the mutual information utility to find sequential designs that estimate parameters efficiently, which was of the form:
$$U\bigl(d^{(t)}\bigr) = \int_{\Theta}\int_{Y} \log\frac{p\bigl(\varphi(\theta)\mid d^{(t)}, y^{(1:t)}\bigr)}{p\bigl(\varphi(\theta)\mid y^{(1:t-1)}\bigr)}\; p\bigl(y^{(t)}\mid d^{(t)},\varphi(\theta)\bigr)\, p\bigl(\varphi(\theta)\mid y^{(1:t-1)}\bigr)\, dy^{(t)}\, d\theta,$$
where $y^{(1:t)}$ are the data observed from the first to the $t$-th trial, $y^{(t)}$ are the data observed at the current ($t$-th) trial using design $d^{(t)}$, and $y^{(1:t-1)}$ are the data observed from the first to the $(t-1)$-th trial using the designs $d^{(1:t-1)}$.…”
Section: Bayesian Utility Functions and Methods for Their Estimation
confidence: 99%
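As context for this excerpt, here is a minimal sketch of how such a sequential mutual-information utility can be estimated by Monte Carlo. The conjugate model $y^{(t)} \mid \theta \sim N(d^{(t)}\theta, \sigma^2)$ with a normal current posterior is an illustrative assumption (it is not the model of Kim et al.), chosen because the posterior update and the KL term are available in closed form for checking:

```python
import numpy as np

# Illustrative sketch (not Kim et al.'s implementation): Monte Carlo estimate
# of the sequential mutual-information utility U(d_t) for a toy conjugate model
#   y_t | theta, d_t ~ N(d_t * theta, sigma^2),  theta | y_(1:t-1) ~ N(mu, tau^2).

def kl_normal(mu1, tau1, mu0, tau0):
    """KL( N(mu1, tau1^2) || N(mu0, tau0^2) )."""
    return np.log(tau0 / tau1) + (tau1**2 + (mu1 - mu0)**2) / (2 * tau0**2) - 0.5

def mi_utility(d, mu, tau, sigma, n_mc=10_000, rng=None):
    """U(d) = E_{theta, y}[ KL( p(theta | y_(1:t)) || p(theta | y_(1:t-1)) ) ]."""
    rng = rng or np.random.default_rng(0)
    theta = rng.normal(mu, tau, n_mc)          # draws from the current posterior
    y = rng.normal(d * theta, sigma)           # simulated outcomes at design d
    post_prec = 1 / tau**2 + d**2 / sigma**2   # conjugate normal update
    post_tau = post_prec ** -0.5
    post_mu = (mu / tau**2 + d * y / sigma**2) / post_prec
    return kl_normal(post_mu, post_tau, mu, tau).mean()

# In this conjugate case the estimate can be checked against the closed form
# 0.5 * log(1 + d^2 * tau^2 / sigma^2).
for d in (0.5, 1.0, 2.0):
    est = mi_utility(d, mu=0.0, tau=1.0, sigma=1.0)
    exact = 0.5 * np.log(1 + d**2)
    print(f"d={d}: MC {est:.4f} vs exact {exact:.4f}")
```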
“…As suggested in [6], the most widely used utility function $u(d, y, \theta)$ is the Kullback-Leibler divergence, i.e., the increase in Shannon information from the prior probability distribution to the posterior probability distribution:…”
Section: Methods
confidence: 99%
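Written out, this utility has the following standard form (a common formulation of the expected Kullback-Leibler utility; the exact notation of reference [6] is not shown in the excerpt):

```latex
% Expected Kullback-Leibler divergence utility (the expected information gain);
% a standard formulation, not necessarily the exact notation of reference [6].
\[
  U(d) \;=\; \int_{Y} \int_{\Theta}
    \log \frac{p(\theta \mid y, d)}{p(\theta)}\,
    p(\theta \mid y, d)\, p(y \mid d)\, d\theta\, dy .
\]
```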
“…The computational expense of evaluating the utility function has been a major challenge in deploying Bayesian design to determine the optimal experiment, as most real-world models are complex and cannot be evaluated analytically (Ryan, 2003; Terejanu et al., 2012). In effect, most of the reported work on Bayesian experimental design has used linear models, and in the few studies with nonlinear models, approximations of the utility function or Gaussian approximations of the posterior distributions are used (Russi et al., 2008; Mosbach et al., 2012).…”
Section: Introduction
confidence: 98%
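For reference, here is a minimal sketch of the nested ("double-loop") Monte Carlo estimator of the expected information gain, the kind of brute-force evaluation whose cost this passage refers to. The one-parameter decay model $y \sim N(e^{-\theta d}, \sigma^2)$ and all settings are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

# Nested Monte Carlo estimator of the expected information gain:
# EIG(d) ~= (1/N) sum_n [ log p(y_n | theta_n, d)
#                         - log (1/M) sum_m p(y_n | theta_m, d) ].
# Toy model (an assumption for illustration): y ~ N(exp(-theta * d), sigma^2),
# with prior theta ~ N(1.0, 0.25^2).

def nested_mc_eig(d, sigma=0.1, n_outer=400, n_inner=400, seed=0):
    rng = np.random.default_rng(seed)
    # Outer loop: draw (theta_n, y_n) from the joint p(theta) p(y | theta, d).
    theta = rng.normal(1.0, 0.25, n_outer)
    y = rng.normal(np.exp(-theta * d), sigma)
    log_lik = norm.logpdf(y, np.exp(-theta * d), sigma)
    # Inner loop: estimate the evidence p(y_n | d) with fresh prior draws.
    theta_in = rng.normal(1.0, 0.25, n_inner)
    ll_matrix = norm.logpdf(y[:, None], np.exp(-theta_in * d)[None, :], sigma)
    log_evidence = logsumexp(ll_matrix, axis=1) - np.log(n_inner)
    return np.mean(log_lik - log_evidence)

# Cost grows as n_outer * n_inner likelihood evaluations per candidate design,
# which is why approximations are attractive when the model is expensive.
print({d: round(nested_mc_eig(d), 3) for d in (0.5, 1.0, 2.0, 4.0)})
```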