Rapid advances in the analysis of data from single-case research designs have produced a potentially confusing array of behavior-change indices, that is, effect sizes. To reduce this confusion, nine effect-size indices are described and compared. Each of these indices examines data nonoverlap between phases. Similarities and differences, both conceptual and computational, are highlighted. Seven of the nine indices are applied to a sample of 200 published time-series data sets to examine their distributions. A generic meta-analytic method is presented for combining nonoverlap indices across multiple data series within complex designs.
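To make "data nonoverlap" concrete, the following is a minimal Python sketch of the simplest of these indices, the percentage of nonoverlapping data (PND): the share of intervention-phase points that exceed the most extreme baseline point. The function name and the assumption that improvement means an increase in the measured behavior are illustrative choices, not taken from the article.

```python
def pnd(baseline, intervention):
    """Percentage of nonoverlapping data (PND).

    Assumes improvement means an increase; for behaviors meant to
    decrease, negate the data or reverse the comparison.
    """
    ceiling = max(baseline)  # most extreme (highest) baseline point
    above = sum(1 for y in intervention if y > ceiling)
    return 100.0 * above / len(intervention)

# 4 of 5 intervention points exceed the highest baseline point -> 80.0
print(pnd([2, 3, 4, 3], [5, 6, 4, 7, 8]))
```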
This article describes and field-tests the improvement rate difference (IRD), a new effect size for summarizing single-case research data. Termed "risk difference" in medical research, IRD expresses the difference in successful performance between baseline and intervention phases. IRD can be calculated from a visual analysis of data nonoverlap and is easily explained to most educators. IRD entails few data assumptions and permits the construction of confidence intervals. The article applies IRD to 166 published data series, correlates the results with three other effect sizes (R2, Kruskal-Wallis W, and the percentage of nonoverlapping data, PND), and reports interrater reliability of IRD hand scoring. The major finding is that IRD is a promising effect size for single-case research.
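As a rough illustration of the idea only, and not the article's scoring procedure (which resolves overlap more robustly), the sketch below treats an intervention point as "improved" if it exceeds every baseline point and a baseline point as "improved" if it ties or exceeds the lowest intervention point; IRD is then the difference between the two phases' improvement rates. The function name and the assumption that improvement means higher values are hypothetical.

```python
def improvement_rate_difference(baseline, intervention):
    """Simplified IRD: difference between the phases' improvement rates.

    Assumes improvement means higher values. An intervention point is
    'improved' if it exceeds all baseline points; a baseline point is
    'improved' if it ties or exceeds the lowest intervention point.
    """
    improved_trt = sum(1 for y in intervention if y > max(baseline))
    improved_base = sum(1 for y in baseline if y >= min(intervention))
    return improved_trt / len(intervention) - improved_base / len(baseline)

# Complete separation between phases -> IRD = 1.0
print(improvement_rate_difference([2, 3, 2], [5, 6, 7]))
```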
Although single-case researchers are not accustomed to analyzing data statistically, standards for research and accountability from government and other funding agencies are creating pressure for more objective, reliable data. In addition, "evidence-based interventions" movements in special education, clinical psychology, and school psychology imply reliable data summaries. Within special education, two heavily debated single-case research (SCR) statistical indices are the "percentage of non-overlapping data" (PND) and the regression effect size, R2. This article proposes a new index, PAND, the "percentage of all non-overlapping data," to remedy deficiencies of both PND and R2. PAND is closely related to an established effect size, Pearson's Phi, the "fourfold point correlation coefficient." The PAND/Phi procedure is demonstrated and applied to 75 published multiple-baseline designs to answer questions about typical effect sizes, relationships with PND and R2, statistical power, and time efficiency. Confidence intervals and p values for Phi are also demonstrated. The findings are that PAND/Phi and PND correlate equally well with R2; however, only PAND/Phi showed adequate power for most of the multiple-baseline designs sampled. The findings suggest that PAND/Phi may meet the requirement for a useful effect size for multiple-baseline and other longer designs in SCR.
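Under the assumption that improvement means an increase, one way PAND might be computed is sketched below: find the smallest number of points that must be removed so that every retained intervention point exceeds every retained baseline point, and express the retained points as a percentage of all points. The Phi value is then obtained by applying the standard fourfold-point formula to a phase-by-region 2x2 table built from those counts; the helper names and the exact table construction are assumptions and may differ from the article's procedure.

```python
from math import sqrt

def overlap_removals(baseline, intervention):
    """Fewest points to drop so every kept intervention point exceeds every kept baseline point."""
    cuts = sorted(set(baseline) | set(intervention))
    cuts.append(cuts[-1] + 1)  # a cut above all values (drops the whole intervention phase)
    best = None
    for c in cuts:
        drop_base = sum(1 for y in baseline if y >= c)    # overlapping baseline points
        drop_trt = sum(1 for y in intervention if y < c)  # overlapping intervention points
        if best is None or drop_base + drop_trt < sum(best):
            best = (drop_base, drop_trt)
    return best

def pand(baseline, intervention):
    """Percentage of all nonoverlapping data."""
    drop_base, drop_trt = overlap_removals(baseline, intervention)
    n = len(baseline) + len(intervention)
    return 100.0 * (n - drop_base - drop_trt) / n

def phi(baseline, intervention):
    """Fourfold-point correlation from an assumed 2x2 table (phase x below/above the separating cut)."""
    drop_base, drop_trt = overlap_removals(baseline, intervention)
    a, b = len(baseline) - drop_base, drop_base        # baseline: below cut, at/above cut
    c, d = drop_trt, len(intervention) - drop_trt      # intervention: below cut, at/above cut
    denom = sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

series = ([2, 3, 5, 3], [5, 6, 4, 7, 8])
print(pand(*series), phi(*series))  # one overlapping point -> PAND ~ 88.9, Phi ~ 0.79
```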