2024
DOI: 10.1177/25152459241240722

Simulation-Based Power Analyses for the Smallest Effect Size of Interest: A Confidence-Interval Approach for Minimum-Effect and Equivalence Testing

Paul Riesthuis

Abstract: Effect sizes are often used in psychology because they are crucial when determining the required sample size of a study and when interpreting the implications of a result. Recently, researchers have been encouraged to contextualize their effect sizes and determine what the smallest effect size is that yields theoretical or practical implications, also known as the “smallest effect size of interest” (SESOI). Having a SESOI will allow researchers to have more specific hypotheses, such as whether their findings a…
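The confidence-interval approach to equivalence-test power that the abstract describes can be sketched with a small Monte Carlo simulation. This is a minimal illustration, not the paper's own code: it assumes a two-group design with standard-normal outcomes (so the raw mean difference is on the Cohen's d scale), a hypothetical SESOI of d = 0.3, and a normal-approximation 90% CI (the TOST-style interval for a 5% equivalence test).

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

Z90 = NormalDist().inv_cdf(0.95)  # for a 90% CI (TOST-style equivalence test)

def equivalence_power(true_d, sesoi, n, n_sims=1000, seed=1):
    """Proportion of simulated 90% CIs for a two-group mean difference
    that fall entirely inside the equivalence bounds (-sesoi, sesoi).

    Assumes standard-normal outcomes in both groups and uses a
    normal-approximation CI for simplicity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        g1 = [rng.gauss(true_d, 1) for _ in range(n)]
        g2 = [rng.gauss(0, 1) for _ in range(n)]
        diff = mean(g1) - mean(g2)
        se = sqrt(stdev(g1) ** 2 / n + stdev(g2) ** 2 / n)
        lo, hi = diff - Z90 * se, diff + Z90 * se
        hits += (-sesoi < lo) and (hi < sesoi)
    return hits / n_sims

# Power to declare equivalence when the true effect is zero and SESOI = 0.3:
print(equivalence_power(true_d=0.0, sesoi=0.3, n=100))  # modest power
print(equivalence_power(true_d=0.0, sesoi=0.3, n=300))  # much higher power
```

The key design point the paper emphasizes survives even in this toy version: power is the proportion of CIs landing inside the SESOI bounds, so it can be read off directly from the simulated intervals rather than from a closed-form formula.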

Cited by 4 publications (2 citation statements)
References 45 publications
“…The sample size determines whether an observed correlation difference yields significance at a specified level. Benchmarks for the evaluation of effect sizes neglecting the concrete context appear useless, suggesting that researchers should justify which effect size they deem non-trivial for the concrete study (Riesthuis, 2024), and subject this effect size to a priori power analyses. The R package diffcor (version 0.8.3; Blötner, 2024) provides a Monte Carlo-based power analysis function for correlation difference tests for dependent correlations (see supplemental template R file: https://osf.io/v5dte/).…”
Section: Which Difference Is Actually a Meaningful Difference? (mentioning; confidence: 99%)
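The Monte Carlo-based power analysis this citation attributes to diffcor can be sketched in outline. The following is not the diffcor implementation (which covers dependent correlations); it is a simplified illustration for two independent correlations using a Fisher z test, with all correlation and sample-size values chosen for the example.

```python
import random
from math import atanh, sqrt
from statistics import NormalDist

Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided test at alpha = .05

def pearson_r(x, y):
    """Plain Pearson correlation (no external dependencies)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def sample_r(rho, n, rng):
    """Sample correlation from a bivariate normal with true correlation rho."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rho * a + sqrt(1 - rho ** 2) * rng.gauss(0, 1) for a in x]
    return pearson_r(x, y)

def mc_power(rho1, rho2, n, n_sims=1000, seed=1):
    """Monte Carlo power of the Fisher z test for the difference between
    two independent correlations, each estimated from n observations."""
    rng = random.Random(seed)
    se = sqrt(2 / (n - 3))
    hits = 0
    for _ in range(n_sims):
        z = (atanh(sample_r(rho1, n, rng)) - atanh(sample_r(rho2, n, rng))) / se
        hits += abs(z) > Z_CRIT
    return hits / n_sims

# Power grows with n for a fixed true difference (.50 vs .20 here):
print(mc_power(0.50, 0.20, n=100))
print(mc_power(0.50, 0.20, n=200))
```

Running the simulation at several candidate sample sizes and picking the smallest n that reaches the target power is exactly the a priori power-analysis workflow the citation describes.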
“…Note that, depending on the intercorrelation of the ostensibly jingled/jangled constructs, this procedure requires larger sample sizes than approaches purported to ensure sufficient power to detect a bivariate correlation of pre-specified size. That is, a correlation as high as r = .10 yields significance at a smaller sample size than a correlation difference as high as Δr = .10, provided equal target power (see also Riesthuis, 2024). However, it stands to reason that robust, consequential, and far-reaching research per se requires and deserves well-powered studies.…”
Section: Which Difference Is Actually a Meaningful Difference? (mentioning; confidence: 99%)
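The quoted claim, that a single correlation of r = .10 reaches significance at a smaller sample size than a correlation difference of Δr = .10 at equal target power, can be checked with the standard Fisher z approximations. This is a back-of-the-envelope sketch; the specific correlations .30 vs. .20 for the difference are assumed for illustration.

```python
from math import atanh, ceil
from statistics import NormalDist

nd = NormalDist()
z_alpha = nd.inv_cdf(1 - 0.05 / 2)  # two-sided alpha = .05
z_beta = nd.inv_cdf(0.80)           # target power = .80

# n to detect a single correlation of r = .10 against rho = 0 (Fisher z):
r = 0.10
n_single = ((z_alpha + z_beta) / atanh(r)) ** 2 + 3

# n per group to detect a difference between two independent correlations
# of .30 vs .20 (delta r = .10; these specific values are illustrative):
dz = atanh(0.30) - atanh(0.20)
n_diff = 2 * ((z_alpha + z_beta) / dz) ** 2 + 3

print(ceil(n_single))  # 783
print(ceil(n_diff))    # 1380
```

Even under this simplified independent-correlations setup, the difference test needs roughly 1,380 observations per group against about 783 in total for the single correlation, which is the direction of the disparity the citation describes (the exact gap for dependent correlations depends on their intercorrelation).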