With rapid advances in the analysis of data from single-case research designs, the array of available behavior-change indices, that is, effect sizes, can be confusing. To reduce this confusion, nine effect-size indices are described and compared. Each index quantifies data nonoverlap between phases. Similarities and differences, both conceptual and computational, are highlighted. Seven of the nine indices are applied to a sample of 200 published time-series data sets to examine their distributions. A generic meta-analytic method is presented for combining nonoverlap indices across multiple data series within complex designs.
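The abstract does not spell out its meta-analytic method. Purely as an illustration, a standard generic device for pooling per-series effect sizes is an inverse-variance weighted mean; this sketch is a common technique, not necessarily the method the article proposes, and the example numbers are hypothetical:

```python
def combine_effect_sizes(effects, variances):
    """Inverse-variance weighted mean of per-series effect sizes.

    A generic meta-analytic pooling sketch: series estimated with less
    sampling variance receive proportionally more weight.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total
    pooled_var = 1.0 / total  # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical nonoverlap estimates from three data series of a design:
pooled, var = combine_effect_sizes([0.80, 0.60, 0.90], [0.01, 0.04, 0.02])
```

Precise series dominate: the third series (variance .02) counts twice as much as the second (variance .04).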
This article describes and field-tests the improvement rate difference (IRD), a new effect size for summarizing single-case research data. Termed "risk difference" in medical research, IRD expresses the difference in successful performance between baseline and intervention phases. IRD can be calculated from visual analysis of nonoverlapping data and is easily explained to most educators. IRD entails few data assumptions and has confidence intervals. The article applies IRD to 166 published data series; correlates results with three other effect sizes (R2, Kruskal-Wallis W, and the percent of nonoverlapping data, PND); and reports interrater reliability of the IRD hand scoring. The major finding is that IRD is a promising effect size for single-case research.
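As a rough computational sketch of the index the abstract describes (assuming that higher scores mean improvement, and omitting the published procedure's refinements for balancing tied removals across phases), IRD can be obtained by finding the fewest data points whose removal eliminates all overlap between phases:

```python
def ird(baseline, treatment):
    """Improvement rate difference, simplified sketch.

    Removes the fewest points needed so that every remaining baseline
    value lies below every remaining treatment value; removed baseline
    points count as "improved" baseline, removed treatment points as
    "not improved" treatment.  IRD = IR_treatment - IR_baseline.
    """
    n_b, n_t = len(baseline), len(treatment)
    best = None
    # Every overlap-free split corresponds to a cut value: remove
    # baseline points >= cut and treatment points < cut.
    cuts = sorted(set(baseline) | set(treatment))
    cuts.append(cuts[-1] + 1)
    for cut in cuts:
        rem_b = sum(1 for x in baseline if x >= cut)   # "improved" baseline points
        rem_t = sum(1 for x in treatment if x < cut)   # "not improved" treatment points
        if best is None or rem_b + rem_t < best[0]:
            best = (rem_b + rem_t, rem_b, rem_t)
    _, improved_b, not_improved_t = best
    return (n_t - not_improved_t) / n_t - improved_b / n_b
```

With no overlap (e.g., baseline `[2, 3, 4]`, treatment `[5, 6, 7]`) the index reaches its ceiling of 1.0; a single overlapping baseline point in `[2, 3, 5]` versus `[4, 5, 6]` lowers it to 2/3.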
Single-case research designs have primarily relied on visual analysis for determining treatment effects. However, the current focus on evidence-based treatment has given rise to the development of new methods. This article presents descriptions, calculations, strengths and weaknesses, and interpretative guidelines for five effect-size indices: the percent of nonoverlapping data (PND), the percent of data exceeding the median (PEM), improvement rate difference (IRD), nonoverlap of all pairs (NAP), and Tau-U.
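Two of the listed indices are simple enough to sketch directly. The following is an illustration under an increase-is-improvement assumption, not the authors' reference implementation:

```python
def pnd(baseline, treatment):
    """Percent of nonoverlapping data: share of treatment points that
    exceed the highest baseline point, expressed as a percentage."""
    ceiling = max(baseline)
    return 100.0 * sum(1 for x in treatment if x > ceiling) / len(treatment)

def nap(baseline, treatment):
    """Nonoverlap of all pairs: every (baseline, treatment) pair scores
    1 if the treatment point is higher, 0.5 on a tie, 0 otherwise;
    NAP is the mean pair score."""
    pairs = [(b, t) for b in baseline for t in treatment]
    score = sum(1.0 if t > b else 0.5 if t == b else 0.0 for b, t in pairs)
    return score / len(pairs)
```

PND depends entirely on the single most extreme baseline point, which is one of its noted weaknesses; NAP uses all n_B x n_T pairwise comparisons and is therefore less sensitive to one outlier.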
Although single-case researchers are not accustomed to analyzing data statistically, standards for research and accountability from government and other funding agents are creating pressure for more objective, reliable data. In addition, "evidence-based interventions" movements in special education, clinical psychology, and school psychology imply reliable data summaries. Within special education, two heavily debated single-case research (SCR) statistical indices are "percentage of non-overlapping data" (PND) and the regression effect size, R2. This article proposes a new index—PAND, the "percentage of all non-overlapping data"—to remedy deficiencies of both PND and R2. PAND is closely related to the established effect size, Pearson's Phi, the "fourfold point correlation coefficient." The PAND/Phi procedure is demonstrated and applied to 75 published multiple baseline designs to answer questions about typical effect sizes, relationships with PND and R2, statistical power, and time efficiency. Confidence intervals and p values for Phi also are demonstrated. The findings are that PAND/Phi and PND correlate equally well to R2. However, only PAND/Phi could show adequate power for most of the multiple baseline designs sampled. The findings suggest that PAND/Phi may meet the requirement for a useful effect size for multiple baseline and other longer designs in SCR.
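A minimal sketch of the PAND/Phi idea, again assuming higher scores mean improvement and without the published procedure's conventions for resolving tied removal counts: PAND is the percentage of all data points retained after removing the fewest points needed to eliminate overlap, and Phi is the fourfold point correlation computed from the resulting 2x2 phase-by-improvement table.

```python
import math

def pand_phi(baseline, treatment):
    """PAND and Pearson's Phi from a 2x2 phase-by-improvement table
    (simplified sketch; ties among minimal removals are not balanced)."""
    n_b, n_t = len(baseline), len(treatment)
    # Fewest removals leaving all baseline values below all treatment values;
    # each overlap-free split corresponds to a cut value.
    cuts = sorted(set(baseline) | set(treatment))
    cuts.append(cuts[-1] + 1)
    rem_b, rem_t = min(
        ((sum(1 for x in baseline if x >= cut),
          sum(1 for x in treatment if x < cut)) for cut in cuts),
        key=sum,
    )
    pand = 100.0 * (n_b + n_t - rem_b - rem_t) / (n_b + n_t)
    # 2x2 table: phase (rows) by improved / not improved (columns).
    a, b = n_t - rem_t, rem_t          # treatment: improved, not improved
    c, d = rem_b, n_b - rem_b          # baseline:  improved, not improved
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    phi = (a * d - b * c) / denom if denom else 0.0
    return pand, phi
```

For baseline `[2, 3, 5]` and treatment `[4, 5, 6]`, removing the single overlapping baseline point retains 5 of 6 points (PAND about 83.3) and yields a Phi of roughly .71.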
The Good Behavior Game (GBG) is a classroom management strategy that uses an interdependent group-oriented contingency to promote prosocial behavior and decrease problem behavior. This meta-analysis synthesized single-case research (SCR) on the GBG across 21 studies, representing 1,580 students in pre-kindergarten through Grade 12. The Tau-U effect size across 137 phase contrasts was .82, 95% CI [0.78, 0.87], indicating a substantial reduction in problem behavior and an increase in prosocial behavior for participating students. Five potential moderators were examined: emotional and behavioral disorder (EBD) risk status, reinforcement frequency, target behaviors, GBG format, and grade level. Findings suggest that the GBG is most effective in reducing disruptive and off-task behaviors, and that students with or at risk for EBD benefit most from the intervention. Implications for research and practice are discussed.
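For readers unfamiliar with the metric reported above, the nonoverlap core of Tau-U for a single A-B phase contrast can be sketched as follows. This is basic Tau (improving pairs minus deteriorating pairs over all pairs), without the baseline-trend correction that full Tau-U applies, and it assumes higher scores indicate improvement:

```python
def tau_ab(baseline, treatment):
    """Basic Tau for an A-B contrast: the difference between improving
    and deteriorating (baseline, treatment) pairs, divided by the total
    number of pairs.  Ties contribute zero.  Sketch of Tau-U's
    nonoverlap core only; full Tau-U also corrects for baseline trend."""
    pos = sum(1 for b in baseline for t in treatment if t > b)  # improving pairs
    neg = sum(1 for b in baseline for t in treatment if t < b)  # deteriorating pairs
    return (pos - neg) / (len(baseline) * len(treatment))
```

A value of 1.0 means every treatment point beats every baseline point; values near 0 indicate no systematic change, which is why the pooled .82 reported above reads as a strong effect.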