2020
DOI: 10.1002/bimj.201900051
Multiple imputation methods for handling incomplete longitudinal and clustered data where the target analysis is a linear mixed effects model

Abstract: Multiple imputation (MI) is increasingly popular for handling multivariate missing data. Two general approaches are available in standard computer packages: MI based on the posterior distribution of incomplete variables under a multivariate (joint) model, and fully conditional specification (FCS), which imputes missing values using univariate conditional distributions for each incomplete variable given all the others, cycling iteratively through the univariate imputation models. In the context of longitudinal …
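The FCS cycling described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes numeric data, linear-regression imputation models, and Gaussian noise draws as a stand-in for proper posterior draws. The function and parameter names (`fcs_impute`, `n_cycles`) are illustrative only.

```python
import numpy as np

def fcs_impute(X, n_cycles=10, rng=None):
    """Sketch of fully conditional specification (chained equations):
    cycle through columns, regressing each incomplete column on all
    the others and redrawing its missing entries each cycle."""
    rng = np.random.default_rng(rng)
    X = X.astype(float).copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_cycles):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)          # all other columns
            A = np.column_stack([np.ones(len(X)), others])
            # Fit the univariate conditional model on observed rows
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            resid_sd = np.std(X[obs, j] - A[obs] @ beta)
            # Redraw imputations with residual noise (crude stand-in
            # for a draw from the posterior predictive distribution)
            X[miss[:, j], j] = A[miss[:, j]] @ beta + rng.normal(
                0.0, resid_sd, miss[:, j].sum())
    return X
```

In a full MI analysis this cycle would be run to convergence several times to create multiple completed datasets, each analyzed with the target model (here, a linear mixed effects model) and pooled with Rubin's rules.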

Cited by 27 publications (34 citation statements) · References 28 publications
“…However, our results are consistent with those from similar studies conducted in a two-level setting. In particular, Huque et al. (2018) and Huque et al. (2019) showed that the single-level JM and FCS approaches, which impute the repeated measures in wide format to account for the clustering of repeated measures (labelled JM-mvni and FCS-standard in their study), performed well compared with several generalized linear mixed model (GLMM) based approaches for handling incomplete longitudinal data [47,57]. Our results are also consistent with simulation results showing that the two-level MI application in Blimp produced regression coefficients with negligible bias, even in small samples with large proportions of missing data, and minimal bias in the variance component estimates for a random intercept model [30].…”
Section: Discussion
confidence: 99%
“…However, caution should be taken when generalizing these results to more complex analysis models, for example multilevel analysis models with random slopes and/or interaction terms. It would be interesting to compare the possible approaches in the context of a random slope model, because the performance of these approaches is likely to be quite different there [57]. With random slopes, the single- and two-level imputation models with extensions, particularly those which use DIs, might lead to biased estimates and can often be infeasible with a large number of clusters [58].…”
Section: Discussion
confidence: 99%
“…When comparing monotone to FCS imputation with the Monte Carlo iterative procedure, we always observed better performance with FCS. We also compared the cross-sectional imputation (PMM) to multilevel multivariate imputation such as 2l.pan (FCS-LMM) or 2l.norm (FCS-LMM-het), which assume homogeneous or heterogeneous within-group variances, respectively [18,29]. Our analysis showed that when the imputed data were out of the normal range, higher variation may have increased the within- and between-imputation variance but did not improve the prediction accuracy.…”
Section: Discussion
confidence: 99%
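The predictive mean matching (PMM) approach contrasted above can be sketched briefly. PMM avoids out-of-range imputations because every imputed value is copied from an observed donor. This is a minimal single-variable illustration, assuming a linear predictor and k nearest donors; the names (`pmm_impute`, `k`) are illustrative, not from the cited software.

```python
import numpy as np

def pmm_impute(y, X, k=5, rng=None):
    """Sketch of predictive mean matching for one incomplete variable y
    given complete covariates X: fit a linear model on observed rows,
    then for each missing row donate the observed y whose predicted
    value is among the k closest to that row's prediction."""
    rng = np.random.default_rng(rng)
    y = y.astype(float).copy()
    miss = np.isnan(y)
    A = np.column_stack([np.ones(len(X)), X])
    # Fit on rows where y is observed
    beta, *_ = np.linalg.lstsq(A[~miss], y[~miss], rcond=None)
    pred = A @ beta
    obs_pred, obs_y = pred[~miss], y[~miss]
    for i in np.flatnonzero(miss):
        # k observed rows with the closest predicted means
        donors = np.argsort(np.abs(obs_pred - pred[i]))[:k]
        y[i] = obs_y[rng.choice(donors)]  # donate a real observed value
    return y
```

Because donated values are drawn from the observed data, PMM keeps imputations within the observed range, in line with the statement's remark about imputed values falling outside the normal range under normal-model imputation.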
“…With incomplete covariates, bias can occur when the probability of excluding an individual with missing covariate data is related to the outcome. Regardless of bias, excluding individuals with missing covariate information will often mean discarding useful observed data, leading to imprecision [64].…”
Section: Discussion
confidence: 99%