Intensive longitudinal studies are becoming increasingly popular because of their potential for studying the individual dynamics of psychological processes. However, the measures used in such studies are quite susceptible to measurement error because they are typically short, and therefore their psychometric properties, such as reliability, are of great concern. Most existing approaches for assessing reliability are not appropriate for intensive longitudinal data (ILD) because they conflate inter- and intra-individual variation or have difficulty handling interindividual differences. In addition, measurement models are often relegated to a secondary role, or omitted entirely, in ILD modeling approaches. Therefore, in this article, we introduce a two-level random dynamic measurement (2RDM) model for ILD, which incorporates measurement models for the key variables of interest. We then discuss how to derive within-person and between-person reliabilities for items and scales in the context of the 2RDM model. A small simulation study illustrates the implementation of the 2RDM model and the estimation of reliability. An empirical study then demonstrates the application of the proposed approach to multidimensional scales: we calculated the within- and between-person reliabilities for both the items and the subscales of a short version of the Perceived Stress Scale and found large individual differences in the within-person reliabilities. We conclude by discussing the advantages of the proposed approach and considerations for its use in practice.
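The within-person/between-person distinction underlying the abstract can be illustrated with a minimal variance-components sketch in Python. This is not the 2RDM model itself: the variance components (`sigma_between`, `sigma_within`, `sigma_error`) and the assumption that the error variance is known are illustrative simplifications; in the article's approach the error variance would come from the measurement model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance components (illustrative values, not from the article)
n_persons, n_times = 200, 50
sigma_between = 1.0   # SD of stable person-level trait scores
sigma_within = 0.8    # SD of true within-person (state) fluctuations
sigma_error = 0.6     # SD of measurement error

person_means = rng.normal(0, sigma_between, size=(n_persons, 1))
states = rng.normal(0, sigma_within, size=(n_persons, n_times))
errors = rng.normal(0, sigma_error, size=(n_persons, n_times))
observed = person_means + states + errors

# Within-person reliability: the share of within-person variance that is
# true state variance rather than measurement error.
within_rel = sigma_within**2 / (sigma_within**2 + sigma_error**2)

# Estimate the same quantity from data: center each person's series, then
# compare its variance to the (here assumed known) error variance.
centered = observed - observed.mean(axis=1, keepdims=True)
within_var_hat = np.mean(np.var(centered, axis=1, ddof=1))
within_rel_hat = 1 - sigma_error**2 / within_var_hat

print(round(within_rel, 3), round(within_rel_hat, 3))
```

In this simplified setup every person shares the same variance components; the point of the 2RDM model's *random* components is precisely that these quantities, and hence the within-person reliabilities, may differ across people.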
Differential item functioning (DIF) occurs when the probability of endorsing an item differs across groups for individuals with the same latent trait level. The presence of DIF items may jeopardize the validity of an instrument; it is therefore crucial to identify DIF items in routine operations of educational assessment. While DIF detection procedures based on item response theory (IRT) have been widely used, the majority of IRT-based DIF tests assume predefined anchor (i.e., DIF-free) items. Not only is this assumption strong, but violations of it may also lead to erroneous inferences, for example, an inflated Type I error rate. We propose a general framework for defining the effect sizes of DIF without a priori knowledge of anchor items. In particular, we quantify DIF by item-specific residuals from a regression model fitted to the true item parameters in the respective groups. Moreover, the null distribution of the proposed test statistic based on a robust estimator can be derived analytically or approximated numerically even when there is a mix of DIF and non-DIF items, which yields asymptotically justified statistical inference. The Type I error rate and power of the proposed procedure are evaluated and compared with those of conventional likelihood-ratio DIF tests in a Monte Carlo experiment. The simulation study shows promising control of the Type I error rate and good power to detect DIF items: even with a mix of DIF and non-DIF items, the true and false alarm rates are well controlled when a robust regression estimator is used.
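The core idea of flagging DIF through residuals from a robust regression of one group's item parameters on the other's can be sketched as follows. This is a hypothetical illustration, not the article's test statistic: it uses a Theil–Sen slope and MAD-based robust z-scores as the robust estimator, and the item difficulties, DIF shift, and cutoff of 3.5 are made-up values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items = 20
b_ref = rng.normal(0, 1, n_items)                          # reference-group difficulties
b_foc = 1.1 * b_ref + 0.2 + rng.normal(0, 0.05, n_items)   # focal-group difficulties
dif_items = [3, 7]
b_foc[dif_items] += 0.8                                    # inject uniform DIF

# Theil–Sen slope: median of all pairwise slopes, robust to the DIF outliers,
# so no anchor items need to be specified in advance.
i, j = np.triu_indices(n_items, k=1)
slope = np.median((b_foc[j] - b_foc[i]) / (b_ref[j] - b_ref[i]))
intercept = np.median(b_foc - slope * b_ref)

# Item-specific residuals quantify DIF; convert to robust z-scores via the MAD.
resid = b_foc - (slope * b_ref + intercept)
mad = np.median(np.abs(resid - np.median(resid)))
z = 0.6745 * (resid - np.median(resid)) / mad

flagged = np.where(np.abs(z) > 3.5)[0]
print(flagged)
```

Because the regression line is fit robustly, the non-DIF items determine the linking transformation even though the DIF items are included in the fit, which is the intuition behind dispensing with predefined anchors.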
R-squared (R²) measures of explained variance are easy to understand, naturally interpretable, and widely used by substantive researchers. In mediation analysis, however, despite recent advances in measures of the mediation effect, few effect sizes have good statistical properties, and most of the existing measures are available only for the simplest three-variable mediation model; this is especially true of R²-type measures. By decomposing the mediator into two parts (i.e., the part related to the predictor and the part unrelated to the predictor), this article proposes a systematic framework for developing new effect-size measures of explained variance in mediation analysis. The framework extends readily to more complex mediation models and provides more fine-grained R² measures for empirical researchers. A Monte Carlo simulation study examines the statistical properties of the proposed R² effect-size measure. Results show that the new R² measure performs well in approximating the true value of the explained variance of the mediation effect. The use of the proposed measure is illustrated with empirical examples, together with program code for its implementation.
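The mediator decomposition at the heart of this framework can be sketched with a short simulation. This is an illustrative example, not the article's proposed R² measures: the path coefficients `a`, `b`, `c` are made-up values, and using the squared correlation between the outcome and the X-related part of the mediator is one simple variant of such a decomposition chosen here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Hypothetical three-variable mediation model: X -> M -> Y with a direct path
a, b, c = 0.5, 0.6, 0.3
x = rng.normal(0, 1, n)
m = a * x + rng.normal(0, 1, n)          # mediator
y = b * m + c * x + rng.normal(0, 1, n)  # outcome

# Decompose the mediator into the predictor-related part and the residual part
a_hat = np.polyfit(x, m, 1)[0]
m_x = a_hat * x   # part of M related to X
m_e = m - m_x     # part of M unrelated to X

# Variance in Y associated with the X-related component of the mediator
r2_mediated = np.corrcoef(y, m_x)[0, 1] ** 2
print(round(r2_mediated, 3))
```

Because `m_x` and `m_e` are orthogonal by construction, their contributions to the variance of Y can be examined separately, which is what makes the decomposition a natural starting point for building R²-type effect sizes.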