If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effects using data with known characteristics. Monte Carlo methods were used to generate AB design data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, that visually inspecting the graphed data is a necessary initial stage. In the presence of serial dependence or a change in data variability, the Nonoverlap of All Pairs (NAP) and the Slope and Level Change (SLC) procedures were the only two of the four examined that performed adequately. Introducing a data-correction step in NAP renders it unaffected by linear trend, as is also the case for the Percentage of Nonoverlapping Corrected Data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can readily be complemented by both visual and statistical analyses. A flowchart is provided to guide the selection of techniques according to the data characteristics identified by visual inspection.

Key words: single-case, effect size, autocorrelation, trend

Single-case experimental designs (SCEDs) have been shown to be useful for evaluating intervention effectiveness in several behavioral fields (Blampied, 2000), including educational (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005) and clinical psychology settings (Callahan & Barisa, 2005; Perdices & Tate, 2010). Evidence-based guidelines on treatment interventions can be established by collating data from multiple SCEDs (Kratochwill & Levin, 2010).
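As a concrete illustration of the NAP statistic discussed above, the following minimal Python sketch computes it for a two-phase AB design. The function name and sample data are our own, and the sketch assumes higher scores indicate improvement; ties between phases count as half an overlap, per the standard definition of NAP (Parker & Vannest, 2009).

```python
from itertools import product

def nap(baseline, treatment):
    """Nonoverlap of All Pairs for an AB design.

    Compares every baseline observation with every treatment
    observation; a pair counts as 1 if the treatment value exceeds
    the baseline value, 0.5 if they tie, and 0 otherwise. Assumes
    higher scores indicate improvement.
    """
    pairs = list(product(baseline, treatment))
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(0.5 for a, b in pairs if b == a)
    return (wins + ties) / len(pairs)

# Hypothetical AB-design data: 5 baseline and 5 treatment points.
A = [3, 4, 3, 5, 4]
B = [6, 7, 7, 6, 8]
print(nap(A, B))  # 1.0 -> complete nonoverlap of the two phases
```

The data-correction step mentioned in the abstract would, under this reading, amount to removing the estimated baseline trend from both phases before computing NAP on the detrended series.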
Some form of summary measure is needed, not only to meet a quality criterion for N = 1 research (Horner et al., 2005), but also for accountability (Chambless & Ollendick, 2001), for communication between researchers, and particularly to enable meta-analyses (Busse, Kratochwill, & Elliott, 1995). The latter are fundamental to evidence-based practice, given that a clinician who wants to select the correct treatment is interested in syntheses (i.e., meta-analyses) of data rather than in individual studies (Kratochwill, 2007). The call for evidence-based practice has emphasized the importance of summary measures (Shadish, Rindskopf, & Hedges, 2008). Single-case investigation and statistical reasoning are clearly not incompatible (White, Rusch, Kazdin, & Hartmann, 1989). In fact, over the last decades a considerable number of methods for analyzing SCED data have been proposed (Allison & Gorman, 1993; Borckardt, Nash, Murphy, Moore, Shaw, & O'Neil, 2008; Center, Skiba, & Casey, 1985-1986; Ma, 2006; Manolov & ...