2013
DOI: 10.1080/09602011.2013.819021
A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic

Abstract: We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect si…

Cited by 53 publications (45 citation statements)
References 45 publications
“…Methodological difficulties are inherent to the field; many researchers argue that conducting any kind of rigorous evaluation of an intervention, such as an RCT, is very challenging in adults with ABI (Turner-Stokes et al., 2005; Perdices & Tate, 2009; Lane-Brown & Tate, 2010; Holloway, 2012). The ubiquity of single-case studies and the lack of consensus over the statistical analysis of SCEDs (Lane & Gast, 2014; Shadish et al., 2014) make synthesis of findings difficult.…”
Section: Discussion
confidence: 99%
“…Although effect size estimates may allow for rank ordering of most to least effective treatments [55], most estimates do not provide metrics that are comparable to effect sizes derived from group designs [31]. However, one estimate that provides metrics comparable to group designs has been developed and tested by Shadish and colleagues [56, 57]. They describe a standardized mean difference statistic (d) that is equivalent to the more conventional d in between-groups experiments.…”
Section: Visual, Statistical, and Social Validity Analysis
confidence: 99%
“…They describe a standardized mean difference statistic (d) that is equivalent to the more conventional d in between-groups experiments. The d statistic can also be used to compute power based on the number of observations in each condition and the number of cases in an experiment [57]. In addition, advances in effect size estimates have led to several meta-analyses of results from SCDs [48, 58–61].…”
Section: Visual, Statistical, and Social Validity Analysis
confidence: 99%
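The phase-level logic behind such a standardized mean difference can be sketched in a few lines. This is a simplified illustration only: it computes a naive (mean(B) − mean(A)) / pooled-SD contrast for a single case with hypothetical data, whereas the d-statistic of Shadish and colleagues additionally corrects for autocorrelation and small-sample bias and pools information across cases.

```python
import statistics

def within_case_d(baseline, intervention):
    """Naive standardized mean difference for one case:
    (mean of intervention phase - mean of baseline phase),
    divided by the pooled SD of the two phases.

    Illustrative sketch only; it does NOT reproduce the
    autocorrelation and bias corrections of the full d-statistic.
    """
    m_a = statistics.mean(baseline)
    m_b = statistics.mean(intervention)
    n_a, n_b = len(baseline), len(intervention)
    # Pool the two phase variances, weighted by degrees of freedom.
    pooled_var = ((n_a - 1) * statistics.variance(baseline)
                  + (n_b - 1) * statistics.variance(intervention)) / (n_a + n_b - 2)
    return (m_b - m_a) / pooled_var ** 0.5

# Hypothetical observations: 6 baseline points, 6 intervention points.
d = within_case_d([3, 4, 2, 3, 4, 3], [6, 7, 5, 6, 7, 6])
```

Because the result is expressed in within-case standard-deviation units, values from several cases can in principle be averaged or meta-analysed, which is the motivation for a group-comparable metric in the excerpt above.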
“…To obtain simulated errors based on an autocorrelation of .3, the autoregressive parameter matrix was set to {1 −.3}, the moving average parameter matrix was set to {1 0}, and the standard deviation of the independent portion of the error was set to 1.0 (for details on the simulation algorithm see Woodfield, 1988). The effect vector was coded to have values of 0 for all baseline observations and values of d for all intervention-phase observations; thus d corresponds to the mean shift between intervention and baseline observations in standard-deviation units, (μ_B − μ_A)/σ (see Busk & Serlin, 2005), where the standard deviation is based on the independent portion of the within-case error term (see, for example, Levin, Ferron, & Kratochwill, 2012) (for an alternative operationalization of d that corresponds mathematically to a conventional groups effect-size measure, see Shadish et al., 2014). The value of d was varied to examine the one-tailed Type I error probability for d = 0 and the power for ds ranging from .5 to 5 in increments of .5.…”
Section: Methods
confidence: 99%
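The data-generating model described in this excerpt can be sketched as follows. The original simulation used an ARMA routine (Woodfield, 1988); here it is approximated as a plain AR(1) recursion with autoregressive parameter .3, innovation SD 1.0, and a mean of 0 in the baseline phase versus d in the intervention phase. Function name and phase lengths are hypothetical.

```python
import random

def simulate_case(n_a, n_b, d, phi=0.3, seed=None):
    """Simulate one single-case series: n_a baseline observations with
    mean 0 and n_b intervention observations with mean d, on top of
    AR(1) errors e_t = phi * e_{t-1} + z_t, where the 'independent
    portion' z_t has SD 1.0 (as in the excerpt).

    Illustrative sketch of the described design, not the original code.
    """
    rng = random.Random(seed)
    series, err = [], 0.0
    for t in range(n_a + n_b):
        err = phi * err + rng.gauss(0.0, 1.0)   # autocorrelated error
        mean = d if t >= n_a else 0.0            # effect vector: 0 then d
        series.append(mean + err)
    return series

# Hypothetical use: a 5 + 5 series with a mean shift of d = 1.5.
y = simulate_case(5, 5, 1.5, seed=42)
```

A test statistic computed on many such simulated series with d = 0 estimates the Type I error rate, and with d > 0 it estimates power, which is exactly how the excerpt varies d from .5 to 5.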