Results from single-case studies are being synthesized using three-level models in which repeated observations are nested in participants, which in turn are nested in studies. We examined the performance of these models under conditions in which the errors associated with the repeated observations (the Level-1 errors) were assumed to be first-order autoregressive. Monte Carlo methods were used to examine conditions in which the first-order autoregressive assumption was accurate, conditions in which it represented an overspecification because the errors were actually independent, and conditions in which it represented a misspecification because the errors were generated on the basis of a moving-average model. Conditions also varied the series lengths, the numbers of participants per study, the numbers of studies per meta-analysis, the variances between the participants within studies, and the variances between studies. Fixed effects (e.g., the average treatment effect for the intervention and the average treatment effect for the trend) tended to be unbiased, and confidence intervals for the fixed effects tended to be accurate even when the error covariance model was overspecified or misspecified. The variance components, particularly at Levels 2 and 3, showed substantial bias.
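The three Level-1 error conditions described above can be sketched with a short simulation. This is an illustrative reconstruction, not the authors' simulation code; the parameter values (phi = 0.4, theta = 0.4) and the series length are arbitrary choices made only so the empirical lag-1 autocorrelations are easy to see:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # unrealistically long series, used only to stabilize the estimates

def lag1_autocorr(x):
    """Empirical lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Condition 1: first-order autoregressive errors, e_t = phi*e_{t-1} + u_t
# (the assumed Level-1 covariance model is accurate)
phi = 0.4
u = rng.normal(size=n)
e_ar = np.empty(n)
e_ar[0] = u[0]
for t in range(1, n):
    e_ar[t] = phi * e_ar[t - 1] + u[t]

# Condition 2: independent errors
# (the AR(1) assumption is an overspecification)
e_ind = rng.normal(size=n)

# Condition 3: first-order moving-average errors, e_t = u_t + theta*u_{t-1}
# (the AR(1) assumption is a misspecification)
theta = 0.4
u2 = rng.normal(size=n + 1)
e_ma = u2[1:] + theta * u2[:-1]

print(lag1_autocorr(e_ar))   # close to phi = 0.4
print(lag1_autocorr(e_ind))  # close to 0
print(lag1_autocorr(e_ma))   # close to theta/(1 + theta**2), about 0.34
```

The MA(1) condition is the interesting misspecification case: its lag-1 autocorrelation is nonzero, so an AR(1) working model partially captures it, but the implied correlations at longer lags differ.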
Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM to single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not alter the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
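What across-participant variation in the Level-1 parameters means can be illustrated with a small sketch. The participant labels and the (phi, sigma) pairs below are hypothetical, chosen only to show that both the lag-1 autocorrelation and the residual standard deviation can be recovered separately for each participant rather than pooled:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(n, phi, sigma, rng):
    """AR(1) Level-1 errors e_t = phi*e_{t-1} + u_t, u_t ~ N(0, sigma^2),
    started from the stationary distribution."""
    e = np.empty(n)
    e[0] = rng.normal(scale=sigma / np.sqrt(1.0 - phi**2))
    u = rng.normal(scale=sigma, size=n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + u[t]
    return e

# Hypothetical participants whose Level-1 parameters (phi, sigma) differ
params = {"P1": (0.1, 1.0), "P2": (0.5, 2.0), "P3": (0.8, 0.5)}
n = 50_000  # unrealistically long series, used only to make estimates stable

est = {}
for pid, (phi, sigma) in params.items():
    e = simulate_ar1(n, phi, sigma, rng)
    x = e - e.mean()
    r1 = float(x[:-1] @ x[1:] / (x @ x))  # empirical lag-1 autocorrelation
    est[pid] = (r1, float(e.std()))
    print(pid, round(r1, 2), round(e.std(), 2))
```

A model that forces a single phi and sigma on all three of these participants would misstate the uncertainty for each of them, which is the concern the study addresses; with real single-case series of 10-40 observations, the per-participant estimates would of course be far noisier than in this sketch.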
The use of multilevel models as a method for synthesising single-case experimental design results is receiving increased consideration. In this article we discuss the potential advantages and limitations of the multilevel modelling approach. We present a basic two-level model where observations are nested within cases, and then discuss extensions of the basic model to accommodate trends, moderators of the intervention effect, non-continuous outcomes, heterogeneity, autocorrelation, the nesting of cases within studies, and more complex single-case design types. We then consider methods for standardising the effect estimates and alternative approaches to estimating the models. These modelling and analysis options are followed by an illustrative example.
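The basic two-level model referred to above, with observations i nested within cases j, is commonly written as follows. The notation here is a generic reconstruction, with D_ij a dummy variable equal to 1 during the intervention phase and 0 at baseline:

```latex
% Level 1: within-case model
y_{ij} = \beta_{0j} + \beta_{1j} D_{ij} + e_{ij},
\qquad e_{ij} \sim N(0, \sigma_e^2)

% Level 2: between-case model, letting baseline levels and
% intervention effects vary across cases
\beta_{0j} = \gamma_{00} + u_{0j}, \qquad
\beta_{1j} = \gamma_{10} + u_{1j}, \qquad
(u_{0j}, u_{1j})' \sim N(\mathbf{0}, \Sigma_u)
```

Here \gamma_{10} is the average intervention effect across cases. The extensions discussed in the article modify this skeleton: trends add time terms at Level 1, moderators enter at Level 2, autocorrelation replaces the independent e_ij structure, and nesting cases within studies adds a third level.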
In special education, multilevel models of single-case research have been used as a method of estimating treatment effects over time and across individuals. Although multilevel models can accurately summarize the effect, it is known that if the model is misspecified, inferences about the effects can be biased. Concern with the potential for model misspecification motivates our method for evaluating multilevel models of single-case data. This method is based on the visual analysis of graphs that have the model-implied individual trajectories superimposed on plots of the raw data. Through the reanalysis of a published study, we show how this visual analysis approach can identify model misspecifications and motivate the consideration of alternative model specifications that lead to better fit.
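The visual check described above amounts to computing each case's model-implied trajectory from the estimated coefficients and overlaying it on that case's raw data. The sketch below is a hypothetical illustration, not the published reanalysis; the case names and coefficient values are invented for demonstration:

```python
import numpy as np

# Model-implied trajectory for one case under a simple phase-plus-trend model:
#   yhat_t = b0 + b1*t + b2*D_t,  with D_t = 1 once the intervention starts.
def implied_trajectory(n_sessions, phase_start, b0, b1, b2):
    t = np.arange(n_sessions)
    D = (t >= phase_start).astype(float)
    return b0 + b1 * t + b2 * D

# Illustrative case-specific estimates: (phase_start, b0, b1, b2)
cases = {"Case A": (5, 2.0, 0.1, 3.0), "Case B": (8, 1.5, 0.0, 2.5)}

for name, (phase_start, b0, b1, b2) in cases.items():
    yhat = implied_trajectory(20, phase_start, b0, b1, b2)
    # In practice one would superimpose yhat on a plot of the raw data for
    # this case (e.g., with matplotlib) and look for systematic misfit such
    # as curvature, phase-specific trends, or delayed effects that the
    # current specification does not capture.
    print(name, yhat[:3], yhat[-3:])
```

Systematic gaps between the raw data and these superimposed trajectories are what motivate the alternative specifications the article considers.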