2019
DOI: 10.1177/0193841x19870878

Optimal Design of Cluster- and Multisite-Randomized Studies Using Fallible Outcome Measures

Abstract: Background: Evaluation studies frequently draw on fallible outcomes that contain significant measurement error. Ignoring outcome measurement error in the planning stages can undermine the sufficiency and efficiency of an otherwise well-designed study and can further constrain the evidence studies bring to bear on the effectiveness of programs. Objectives: We develop simple formulas to adjust statistical power, minimum detectable effect (MDE), and optimal sample allocation formulas for two-level cluster- and multisite-randomized…
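The paper's adjustment formulas are not reproduced on this page, but the standard attenuation logic they build on can be sketched. Below is a minimal Python illustration, not the paper's exact method: it assumes classical measurement error at the individual level with outcome reliability lam, which attenuates the standardized effect by sqrt(lam) and rescales the observed intraclass correlation (ICC) to rho * lam. All function names and numbers are illustrative.

```python
# A minimal sketch (not the paper's exact formulas) of how individual-level
# measurement error propagates into power for a two-level cluster-randomized
# trial. Assumes balanced allocation, no covariates, and classical error at
# level 1 with reliability `lam` (share of observed variance that is
# true-score variance).
from scipy import stats

def crt_power(delta, rho, J, n, lam=1.0, alpha=0.05):
    """Approximate power to detect standardized effect `delta` in a two-level
    CRT with J clusters (split evenly across arms), n units per cluster,
    unconditional ICC `rho`, and outcome reliability `lam`."""
    delta_obs = delta * lam ** 0.5   # attenuated standardized effect
    rho_obs = rho * lam              # observed ICC under level-1 error
    # Variance of the standardized treatment-effect estimate.
    var = 4 * (rho_obs + (1 - rho_obs) / n) / J
    df = J - 2
    ncp = delta_obs / var ** 0.5     # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

# Power erodes as reliability drops:
for lam in (1.0, 0.8, 0.6):
    print(lam, round(crt_power(delta=0.25, rho=0.10, J=60, n=20, lam=lam), 3))
```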

Cited by 6 publications (2 citation statements)
References 24 publications (44 reference statements)

“…One limitation of the current study is that we assume the cost data are measured with validity and reliability through the ingredients method, while some ingredients might be unobserved, unintended, hard to identify, or subject to measurement error. Prior studies (e.g., Cox & Kelcey, 2019) have discussed the impact of measurement error in effectiveness measures (e.g., test scores) on power and MDES; however, to date no studies have addressed the validity of the ingredients method or the reliability (or measurement error) of cost estimates based on the ingredients method. When cost estimates are subject to measurement error, the uncertainty (e.g., SEs) of the estimated incremental cost (i.e., ΔC) will increase, and thus the power to detect the cost-effectiveness of a particular intervention will decrease and the MDES will increase.…”
Section: Results
confidence: 99%
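The quoted logic can be illustrated numerically. A hedged sketch with hypothetical numbers, not drawn from the cited study: adding a measurement-error variance component to the standard error of the incremental cost ΔC visibly lowers the power of a simple two-sided test.

```python
# Hypothetical numbers illustrating the quoted claim: noisier cost
# ingredients inflate SE(delta_C), which lowers power (and raises the
# minimum detectable difference).
from scipy import stats

def power_z(effect, se, alpha=0.05):
    """Two-sided z-test power for a true difference `effect` with SE `se`."""
    z = stats.norm.ppf(1 - alpha / 2)
    return (1 - stats.norm.cdf(z - effect / se)
            + stats.norm.cdf(-z - effect / se))

dc, se_true = 150.0, 50.0                   # hypothetical delta_C and its SE
se_noisy = (se_true**2 + 40.0**2) ** 0.5    # extra variance from noisy ingredients
print(round(power_z(dc, se_true), 2))       # ~0.85 without measurement error
print(round(power_z(dc, se_noisy), 2))      # ~0.65 once SE is inflated
```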
“…Many more groups need to be randomly assigned to treatment and control conditions to achieve adequate power; this design is known as cluster randomization in the literature. Many scholars have spearheaded this line of research and guide practitioners in designing rigorous cluster-randomized trials (e.g., Bloom, 1995, 2006; Bloom et al., 1999; Bulus & Dong, 2021; Bulus & Şahin, 2019; Cox & Kelcey, 2019a, 2019b; Dong, Kelcey, & Spybrook, 2017; Dong & Maynard, 2013; Kelcey, Dong, Spybrook, & Cox, 2017; Kelcey, Dong, Spybrook, & Shen, 2017; Konstantopoulos, 2009, 2011, 2013; Raudenbush, 1997; Raudenbush & Liu, 2000; and many others). There are publicly available software tools that implement results from these studies to assist with the design of cluster-randomized trials (e.g., PowerUp!, Dong & Maynard, 2013; PowerUpR, Bulus et al., 2019; OD+, Spybrook et al., 2011).…”
Section: (Block) Randomize and Adjust for Baseline Differences
confidence: 99%
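The need for "many more groups" in this quote follows from the textbook design effect for cluster randomization, DEFF = 1 + (n − 1)ρ. A minimal sketch with illustrative numbers; this is standard material, not output from PowerUp! or the other tools named above.

```python
# Why cluster randomization needs many more units: randomizing intact
# clusters inflates the variance of the treatment-effect estimate by the
# design effect DEFF = 1 + (n - 1) * rho.
def design_effect(n, rho):
    """Variance inflation from randomizing clusters of size n with ICC rho."""
    return 1 + (n - 1) * rho

n, rho, J = 25, 0.15, 40       # illustrative cluster size, ICC, cluster count
deff = design_effect(n, rho)
print(deff)                    # 4.6: need ~4.6x the simple-random-sample size
print(round(n * J / deff))     # 40 clusters of 25 behave like ~217 independent units
```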