In a number of scientific fields, researchers need to assess whether a variable has changed between two time points. Average-based change (ABC) statistics such as Cohen's d or Hays' ω² evaluate the change in the center of the distributions, whereas individual-based change (IBC) statistics such as the Standardized Individual Difference or the Reliable Change Index evaluate whether each case in the sample experienced a reliable change. Through an extensive simulation study, we show that, contrary to what previous studies have speculated, ABC and IBC statistics are closely related. The relation can be assumed to be linear, and it holds regardless of sample size, pre-post correlation, and the shape of the scores' distribution, both in single-group designs and in experimental designs with a control group. We encourage researchers to use IBC statistics to evaluate their effect sizes because: (a) they allow the identification of cases that changed reliably; (b) they facilitate the interpretation and communication of results; and (c) they provide a straightforward evaluation of the magnitude of empirical effects while avoiding the problems of arbitrary general cutoffs.
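The contrast between the two families of statistics can be made concrete with a small sketch. The snippet below simulates pre-post scores (all parameter values are hypothetical, not taken from the abstract) and computes one ABC statistic, Cohen's d for the pre-post difference, alongside one IBC statistic, the Jacobson-Truax Reliable Change Index, which flags each case whose change exceeds what measurement error alone would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre/post scores for n cases.
n, r_xx = 200, 0.80                  # r_xx: assumed test reliability
pre = rng.normal(50, 10, n)
post = pre + rng.normal(3, 5, n)     # simulated mean gain of 3 points

# Average-based change (ABC): Cohen's d, the mean pre-post difference
# standardized by the pretest standard deviation.
d = (post - pre).mean() / pre.std(ddof=1)

# Individual-based change (IBC): Reliable Change Index per case.
# The standard error of the difference follows from the reliability.
se_meas = pre.std(ddof=1) * np.sqrt(1 - r_xx)
s_diff = np.sqrt(2) * se_meas
rci = (post - pre) / s_diff

# Proportion of cases whose change is reliable at the 1.96 cutoff.
reliably_improved = np.mean(rci > 1.96)
print(f"d = {d:.2f}, reliably improved: {reliably_improved:.0%}")
```

Where d summarizes the whole sample in one number, the RCI vector identifies exactly which cases changed reliably, which is point (a) of the recommendation above.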
Cognitive training and brain stimulation studies have suggested that human cognition, primarily working memory and attention control processes, can be enhanced. Some authors claim that gains (i.e., posttest minus pretest scores) from such interventions are unevenly distributed among people. The magnification account (expressed by the Gospel adage "to those who have, more will be given") predicts that the largest gains will be shown by the most cognitively efficient people, who are also the most effective at exploiting interventions. In contrast, the compensation account ("to those who have, less will be given") predicts that such people already perform at ceiling, so interventions will yield the largest gains in the least cognitively efficient people. Evidence for the latter account comes from reported negative correlations between pretest scores and training/stimulation gains. In this paper, using mathematical derivations and simulation methods, we show that such correlations are pure statistical artifacts caused by the well-known phenomenon of regression to the mean. Unfortunately, more advanced methods, such as alternative measures, linear models, and control groups, do not guarantee a correct assessment of the compensation effect either. The only correct method is direct modeling of the correlations between latent true measures and gain. As, to date, no training/stimulation study has correctly used this method to provide evidence in favor of the compensation account, we must conclude that most (if not all) of the existing evidence is inconclusive.
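The artifact described above is easy to reproduce. In the sketch below (a minimal simulation with hypothetical parameter values), every simulated person has exactly zero true gain, yet the observed pretest-gain correlation is strongly negative, purely because the same measurement error enters the pretest with a positive sign and the gain score with a negative sign:

```python
import numpy as np

rng = np.random.default_rng(1)

# True ability is identical at both time points: no real compensation,
# every simulated person has exactly zero true gain.
n = 10_000
true = rng.normal(0, 1, n)
pre = true + rng.normal(0, 1, n)    # observed = true + measurement error
post = true + rng.normal(0, 1, n)
gain = post - pre                   # true gain is zero; only error remains

r = np.corrcoef(pre, gain)[0, 1]
print(f"pretest-gain correlation: {r:.2f}")  # ≈ -0.5 despite zero true gain
```

With equal true-score and error variances, cov(pre, gain) = -var(error), giving an expected correlation of -0.5, which is why a raw pretest-gain correlation cannot by itself evidence compensation.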
Studying the time-related course of psychological processes is a challenging endeavor, particularly over long developmental periods. Accelerated longitudinal designs (ALDs) allow capturing such periods with a limited number of assessments in a much shorter time frame. In ALDs, participants from different cohorts are measured repeatedly, but the measures provided by each participant cover only a fraction of the time range of the study. It is then assumed that the common trajectory can be studied by aggregating the information provided by the different converging cohorts. We conducted a Monte Carlo study to evaluate the practical relevance of using discrete- and continuous-time latent change score models for recovering the trajectories of a developmental process from ALD data under different sampling conditions. We focused on exponential trajectories typically found in the development of cognitive abilities from childhood to early adulthood. The results support the appropriateness of ALDs for studying such processes under various sampling conditions. When all cohorts are drawn from the same population, both discrete- and continuous-time models are able to recover the parameters defining the underlying developmental process. However, discrete-time models yield biased estimates when time lags between observations are not constant. When cohorts are not from the same population and, thus, lack convergence, both types of models show bias in various parameters. We discuss the findings in the context of developmental methodology, encourage researchers to adopt continuous-time models to analyze data from ALDs, and provide recommendations about how to implement such research designs.
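The sampling logic of an ALD can be illustrated with a short sketch. The snippet below (all parameter values, cohort ages, and the noise level are hypothetical) generates data from an exponential developmental trajectory, with each cohort contributing only a narrow age window; pooled together, the cohorts span the full developmental period:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical exponential growth curve for the developmental process:
# scores rise from an initial level toward an asymptote.
asymptote, initial, rate = 100.0, 40.0, 0.25

def trajectory(age):
    return asymptote - (asymptote - initial) * np.exp(-rate * age)

# Accelerated longitudinal design: each cohort enters at a different age
# and contributes only a few repeated measures.
cohort_start_ages = [6, 9, 12, 15]
n_waves, lag = 3, 1.0                # 3 annual assessments per cohort
data = {}
for start in cohort_start_ages:
    ages = start + lag * np.arange(n_waves)
    scores = trajectory(ages) + rng.normal(0, 2, n_waves)  # noisy observations
    data[start] = (ages, scores)

# Each cohort covers only 2 years, but pooled they span ages 6-17.
all_ages = np.concatenate([ages for ages, _ in data.values()])
print(all_ages.min(), all_ages.max())  # prints 6.0 17.0
```

Recovering the three curve parameters from such pooled data is exactly the task the latent change score models in the study are evaluated on; a continuous-time parameterization is what accommodates unequal lags between observations.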