2020
DOI: 10.1002/etc.4847
The Minimum Detectable Difference (MDD) Concept for Establishing Trust in Nonsignificant Results: A Critical Review

Abstract: Current regulatory guidelines for pesticide risk assessment recommend that nonsignificant results should be complemented by the minimum detectable difference (MDD), a statistical indicator that is used to decide whether the experiment could have detected biologically relevant effects. We review the statistical theory of the MDD and perform simulations to understand its properties and error rates. Most importantly, we compare the skill of the MDD in distinguishing between true and false negatives (i.e., type II…

Cited by 20 publications (15 citation statements)
References 58 publications
“…We ensured a large sample by choosing an appropriately large school district; however, our sample size was greatly impacted by uncontrollable factors such as how many parents gave permission, how many students moved away from the district, and absenteeism on survey days. After conducting analyses, we conducted a minimum detectable effect analysis (Juras et al, 2016; Mair et al, 2020). These analyses indicated that the study was appropriately powered for the three main outcomes presented in the current analyses (overall peer violence perpetration, overall victimization, and sexual victimization).…”
Section: Methods
confidence: 99%
“…Therefore, and in addition to justified questions about the appropriate estimate of the effect size as well as assumptions about the correlations between repeated measures, the results of our initial analyses of sample sizes and power have to be interpreted cautiously. Calculating confidence intervals around the estimated effect, ηp², can help determine whether a nonsignificant result indicates the true absence of an effect rather than a lack of power [56], as the true value of the population effect lies within this interval [57]. The lower bound for most of the experiments includes values of zero and therefore no effect at all (Table 4, [58,59]), which is in line with null hypothesis significance testing of the ANOVAs.…”
Section: General Discussion and Conclusion
confidence: 87%
“…A statistical analysis was conducted to determine the minimal detectable difference (MDD) in the QTc interval, calculated as the half-width of the 95% CI. The MDD value represents the smallest difference between group means that would be detectable in the t-test at a probability level <0.05 (e.g., the border between significance and nonsignificance) [23]. The inclusion of an in-study statistical metric is a best practice consideration for new cardiovascular telemetry studies to quantify QTc assay sensitivity and for potential comparison to prior studies [18,24]…”
Section: Methods
confidence: 99%
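The MDD definition quoted above — the half-width of the 95% confidence interval on the difference between two group means — can be sketched as a short calculation. This is a minimal illustration under an equal-variance two-sample t-test, not code from any of the cited studies; the function name and the pooled-variance assumption are ours.

```python
import numpy as np
from scipy import stats

def minimum_detectable_difference(group_a, group_b, alpha=0.05):
    """Half-width of the two-sided (1 - alpha) CI for the difference
    in group means, assuming equal variances (pooled two-sample t-test)."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n1, n2 = len(a), len(b)
    df = n1 + n2 - 2
    # pooled standard deviation across the two groups
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / df)
    # standard error of the difference between the two means
    se_diff = sp * np.sqrt(1 / n1 + 1 / n2)
    # critical t-value for the two-sided test at level alpha
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit * se_diff

mdd = minimum_detectable_difference([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
```

An observed difference in means smaller than this MDD would not reach significance at the chosen alpha, which is exactly the "border between significance and nonsignificance" interpretation given in the quoted methods section.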