2019
DOI: 10.1080/10705511.2019.1604140
Predicting a Distal Outcome Variable From a Latent Growth Model: ML versus Bayesian Estimation

Abstract: Latent growth models (LGMs) with a distal outcome allow researchers to assess longer-term patterns, and to detect the need to start a (preventive) treatment or intervention at an early stage. The aim of the current simulation study is to examine the performance of an LGM with a continuous distal outcome under maximum likelihood (ML) and Bayesian estimation with default and informative priors, under varying sample sizes, effect sizes, and slope variance values. We conclude that caution is needed when predicting …

Cited by 20 publications (17 citation statements)
References 66 publications
“…The estimation difficulties in conditions with few raters and low variability are consistent with several studies on the performance of MCMC estimation for hierarchical models (Gelman and Hill 2006; McNeish and Stapleton 2016; Polson and Scott 2012; Smid et al. 2019). In the MCMC approach to estimating the ICCs, priors should be specified for the distribution of random effects.…”
Section: Introduction (supporting)
confidence: 72%
“…The choice among hyperprior distributions for random-effect variances is frequently discussed (see e.g., Gelman 2006; Gelman et al. 2013; Smid et al. 2019; Van Erp et al. 2019). Prior and hyperprior distributions can be classified into informative or uninformative distributions, proper or improper distributions, and default, thoughtful or data-dependent distributions.…”
Section: Hyperprior Distributions (mentioning)
“…Whether a sample is small depends on the complexity of the model that is estimated. One way to express the size of a sample is to look at the ratio between the number of observations and the number of unknown parameters in the model (e.g., Lee and Song, 2004; Smid et al., 2019a). A sample could be considered very small when this ratio is 2, which means there are just two observations for each unknown parameter.…”
Section: What Is a Small Sample? (mentioning)
confidence: 99%
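The observations-to-parameters ratio described in the statement above is simple arithmetic; the sketch below illustrates it. The parameter count used here (ten free parameters for a hypothetical linear latent growth model) is an assumed example value, not a figure taken from the cited papers.

```python
def obs_per_parameter(n_observations: int, n_parameters: int) -> float:
    """Ratio of observations to freely estimated model parameters.

    By the rule of thumb quoted above, a ratio around 2 would mark
    a "very small" sample for the model in question.
    """
    if n_parameters <= 0:
        raise ValueError("model must have at least one free parameter")
    return n_observations / n_parameters


# Hypothetical example: a linear LGM with ~10 free parameters
# (growth-factor means, variances, covariance, residual variances).
ratio = obs_per_parameter(n_observations=20, n_parameters=10)
print(ratio)  # 2.0 -> "very small" by the ratio-of-2 rule of thumb
```

With 20 cases and 10 free parameters the ratio is exactly 2, i.e. the boundary case the citing authors label a very small sample; the same function flags larger samples as less problematic as the ratio grows.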