1987
DOI: 10.1080/01621459.1987.10478458
The Calculation of Posterior Distributions by Data Augmentation

Cited by 2,714 publications (638 citation statements)
References 10 publications
“…Consequently, no closed form is available for the posterior p(θ, δ, Q | Y). This problem can be addressed via the data augmentation idea of Tanner and Wong [25]. The data augmentation technique treats the latent quantities {Ω, Z} as hypothetical missing data and augments them with the observed data to form the complete data.…”
Section: Gibbs Sampling Scheme and Posterior Analysis
confidence: 99%
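The excerpt above describes the core data-augmentation idea: treat latent quantities as missing data and alternate between imputing them and drawing the parameters given the completed data. A minimal sketch of this scheme, using a hypothetical two-component Gaussian mixture (known variances, equal weights) where the component labels play the role of the missing data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: data augmentation for a two-component Gaussian
# mixture. The labels z_i are the "missing data"; alternating draws of
# (z | mu, y) and (mu | z, y) form the augmentation scheme described in
# Tanner and Wong (1987). All names and values here are illustrative.
y = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 150)])
mu = np.array([0.0, 1.0])       # initial component means
sigma2, tau2 = 1.0, 100.0       # known obs. variance, diffuse prior variance

for _ in range(2000):
    # Augmentation step: draw latent labels given the current means.
    log_p = -0.5 * (y[:, None] - mu[None, :]) ** 2 / sigma2
    p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (rng.random(len(y)) < p[:, 1]).astype(int)
    # Posterior step: draw each mean from its conjugate normal conditional.
    for k in (0, 1):
        yk = y[z == k]
        prec = len(yk) / sigma2 + 1.0 / tau2
        mean = (yk.sum() / sigma2) / prec
        mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))
```

After the loop, the draws of `mu` are (approximately) samples from the posterior over the component means, concentrating near the true values -2 and 3.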
“…Under the SIR model, data augmentation techniques (Tanner and Wong, 1987) cannot be applied directly because of the difficulties in obtaining conditional expectations of the numbers of subjects in each of the three classes. To avoid such difficulties, Cauchemez and Ferguson (2008) approximated the SIR model with a diffusion process, but their approach assumed a large population size and would not be suitable for data collected in small communities or households.…”
Section: JMSS
confidence: 99%
“…In order to calculate the posterior distribution, we use the data augmentation technique (Tanner and Wong, 1987) and introduce the non-positive latent variables λ = {λ_ij ; j ∈ {1, …, n_i} : h_ij = 0; i = 1, …, N}. We also define λ_ij = h_ij for j ∈ {1, …, n_i} : h_ij > 0.…”
Section: Posterior Distribution
confidence: 99%
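This last excerpt uses augmentation for left-censored data: where an observation equals zero, a non-positive latent value is drawn from a truncated conditional. A minimal Tobit-style sketch of that step, assuming a normal model with known variance and a flat prior (the names `h`, `lam`, and `mu` are illustrative, not from the cited paper):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Hypothetical illustration: observed h is the latent value censored at
# zero. For each h = 0 we draw a non-positive latent lam from the normal
# conditional truncated to (-inf, 0]; where h > 0 we set lam = h, as in
# the excerpt's definition of the latent variables.
mu_true, sigma = 0.5, 1.0
lam_true = rng.normal(mu_true, sigma, 500)
h = np.maximum(lam_true, 0.0)                 # left-censor at zero

mu = 0.0
for _ in range(1000):
    lam = h.copy()
    cens = h == 0
    # Augmentation step: impute latent values on (-inf, 0].
    b = (0.0 - mu) / sigma                    # standardized upper bound
    lam[cens] = truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                              size=cens.sum(), random_state=rng)
    # Posterior step for mu under a flat prior: N(mean(lam), sigma^2 / n).
    mu = rng.normal(lam.mean(), sigma / np.sqrt(len(lam)))
```

The alternation is the same two-step pattern as above; only the imputation distribution (a truncated normal) changes to respect the censoring constraint.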