2016
DOI: 10.1080/10543406.2016.1167075

An efficient monotone data augmentation algorithm for multiple imputation in a class of pattern mixture models

Abstract: We develop an efficient Markov chain Monte Carlo algorithm for the mixed-effects model for repeated measures (MMRM) and a class of pattern mixture models (PMMs) via monotone data augmentation (MDA). The proposed algorithm is particularly useful for multiple imputation in PMMs and is illustrated by the analysis of an antidepressant trial. We also describe the full data augmentation (FDA) algorithm for MMRM and PMMs and show that the marginal posterior distributions of the model parameters are the same in the MD…
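To make the abstract's two-step structure concrete, below is a minimal, self-contained Python sketch of monotone data augmentation for multivariate normal repeated measures. It is not the authors' implementation: it drops the MMRM covariates (intercept-only sequential regressions), assumes a Jeffreys-style prior and enough subjects per visit, and the function names (seq_to_joint, impute_intermittent, p_step, mda) are illustrative.

import numpy as np

rng = np.random.default_rng(2016)

def seq_to_joint(a, b, s2):
    """Convert sequential-regression parameters, y_j | y_1..y_{j-1} ~
    N(a[j] + b[j] @ y_{1:j-1}, s2[j]), into the implied joint N(mu, Sigma)."""
    J = len(a)
    mu, Sigma = np.zeros(J), np.zeros((J, J))
    for j in range(J):
        mu[j] = a[j] + b[j] @ mu[:j]
        cov_jp = b[j] @ Sigma[:j, :j]          # Cov(y_j, y_1..y_{j-1})
        Sigma[j, :j] = Sigma[:j, j] = cov_jp
        Sigma[j, j] = s2[j] + b[j] @ Sigma[:j, :j] @ b[j]
    return mu, Sigma

def impute_intermittent(Y, mu, Sigma):
    """I-step: impute only the intermittent missing values (gaps before a
    subject's last observed visit), restoring a monotone pattern."""
    Yc = Y.copy()
    n, J = Y.shape
    for i in range(n):
        miss = np.isnan(Y[i])
        obs_idx = np.flatnonzero(~miss)
        if obs_idx.size == 0:
            continue                           # never observed: nothing to condition on
        inter = miss.copy()
        inter[obs_idx[-1]:] = False            # leave post-dropout gaps missing
        if not inter.any():
            continue
        obs = ~miss
        S_oo = Sigma[np.ix_(obs, obs)]
        S_mo = Sigma[np.ix_(inter, obs)]
        cond_mu = mu[inter] + S_mo @ np.linalg.solve(S_oo, Y[i, obs] - mu[obs])
        cond_S = Sigma[np.ix_(inter, inter)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
        Yc[i, inter] = rng.multivariate_normal(cond_mu, cond_S)
    return Yc

def p_step(Ym):
    """P-step: with monotone data, the posterior factorizes into independent
    Bayesian regressions of y_j on (1, y_1..y_{j-1}) over available cases."""
    n, J = Ym.shape
    a, b, s2 = [], [], []
    for j in range(J):
        rows = ~np.isnan(Ym[:, j])             # monotone: earlier visits observed too
        X = np.column_stack([np.ones(rows.sum()), Ym[rows, :j]])
        y = Ym[rows, j]
        XtX_inv = np.linalg.inv(X.T @ X)
        bhat = XtX_inv @ (X.T @ y)
        resid = y - X @ bhat
        s2_j = (resid @ resid) / rng.chisquare(rows.sum() - X.shape[1])
        coef = rng.multivariate_normal(bhat, s2_j * XtX_inv)
        a.append(coef[0]); b.append(coef[1:]); s2.append(s2_j)
    return a, b, s2

def mda(Y, n_iter=500):
    """Alternate the I-step and P-step; return posterior draws of (mu, Sigma)."""
    mu = np.nanmean(Y, axis=0)
    Sigma = np.diag(np.nanvar(Y, axis=0) + 1e-6)   # jitter keeps the start PD
    draws = []
    for _ in range(n_iter):
        Ym = impute_intermittent(Y, mu, Sigma)     # I-step
        mu, Sigma = seq_to_joint(*p_step(Ym))      # P-step
        draws.append((mu, Sigma))
    return draws

In a PMM workflow, a call like mda(Y, 500) would supply the parameter draws from which the post-dropout missing data are then imputed under the chosen pattern mixture restriction.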

Cited by 12 publications (95 citation statements). References 25 publications.
“…If the imputation model contains only normal linear models with conditional mean $\mathrm{E}(y_{ij} \mid y_{i1}, \ldots, y_{i,j-1}) = \sum_{k=1}^{q} x_{ik}\alpha_{jk} + \sum_{k=1}^{j-1} \beta_{jk} y_{ik}$, the MH sampler for $y_i^c$ becomes a Gibbs sampler ($A_{jy} \equiv 1$) and the proposed algorithm reduces to the MDA algorithm for multivariate normal data (except that the priors may differ). For longitudinal binary or ordinal outcomes, the above algorithm is identical to that of Tang…”
Section: MDA Algorithm (mentioning, confidence: 99%)
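As a concrete rendering of the quoted conditional mean, the short sketch below draws one intermittent value from its sequential normal regression; all dimensions and numbers are hypothetical, and the Gibbs reduction corresponds to the acceptance ratio $A_{jy}$ being identically 1.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical values purely for illustration: imputing visit j = 3 with
# q = 2 baseline covariates (the first being an intercept).
x_i     = np.array([1.0, 0.4])    # x_i1..x_iq
alpha_j = np.array([0.5, 1.2])    # alpha_j1..alpha_jq
beta_j  = np.array([0.6, 0.3])    # beta_j1, beta_j2
y_prev  = np.array([2.1, 1.7])    # y_i1, y_i2 (observed or already imputed)
sigma_j = 0.8                     # residual s.d. of the j-th regression

# E(y_ij | y_i1..y_{i,j-1}) = sum_k x_ik alpha_jk + sum_k beta_jk y_ik
mean_ij = x_i @ alpha_j + beta_j @ y_prev
y_ij = rng.normal(mean_ij, sigma_j)   # direct Gibbs draw, no MH rejection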
“…The MDA algorithm iterates between an imputation I-step, in which the intermittent missing data are imputed given the current draw of the model parameters, and a posterior P-step, in which the model parameters are updated given the current imputed monotone data. It tends to converge faster, with smaller autocorrelation between posterior samples, than a full data augmentation algorithm that imputes both the intermittent missing data and the missing data after dropout during the I-step. Schafer's algorithm was recently improved by Tang.…”
Section: Introduction (mentioning, confidence: 99%)
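The smaller-autocorrelation claim can be checked empirically from sampler output. A minimal sketch follows, using simulated AR(1) series as stand-ins for one parameter's MDA and FDA traces; the rho values are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(7)

def autocorr(chain, lag):
    """Sample lag-k autocorrelation of a 1-d posterior chain."""
    c = chain - chain.mean()
    return (c[:-lag] @ c[lag:]) / (c @ c)

def ar1(rho, n):
    """AR(1) series as a stand-in for a sampler's parameter trace."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

mda_trace = ar1(0.2, 5000)   # weakly autocorrelated, as MDA tends to produce
fda_trace = ar1(0.9, 5000)   # sticky chain mimicking full data augmentation

for lag in (1, 5, 10):
    print(f"lag {lag:2d}: MDA {autocorr(mda_trace, lag):+.2f}  "
          f"FDA {autocorr(fda_trace, lag):+.2f}")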