2014
DOI: 10.1177/1745691614551642

Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors

Abstract: You have just finished running an experiment. You analyze the results, and you find a significant effect. Success! But wait: how much information does your study really give you? How much should you trust your results? In this article, we show that when researchers use small samples and noisy measurements to study small effects, as they often do in psychology as well as other disciplines, a significant result is often surprisingly likely to be in the wrong direction and to greatly overestimate an effect. In this a…
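The two failure modes described here are what Gelman and Carlin call Type S (sign) and Type M (magnitude) errors, and both can be estimated once you posit a true effect size and the standard error of its estimate. The following is a minimal Monte Carlo sketch in Python; the true effect of 0.1 and standard error of 0.3 are illustrative assumptions, not values taken from the article.

```python
import numpy as np
from scipy import stats

def retrodesign_sim(true_effect, se, alpha=0.05, n_sims=1_000_000, seed=0):
    """Estimate power, Type S rate, and exaggeration ratio for a z-test,
    given an assumed true effect and the standard error of its estimate."""
    rng = np.random.default_rng(seed)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    estimates = rng.normal(true_effect, se, n_sims)  # sampling distribution
    significant = np.abs(estimates) > z_crit * se    # two-sided test at alpha
    power = significant.mean()
    # Type S: among significant results, how often is the sign wrong?
    wrong_sign = np.sign(estimates) != np.sign(true_effect)
    type_s = (significant & wrong_sign).mean() / power
    # Type M (exaggeration ratio): E[|estimate|] / |true effect|,
    # conditional on reaching significance.
    type_m = np.abs(estimates[significant]).mean() / abs(true_effect)
    return power, type_s, type_m

# Illustrative inputs only: a small true effect measured noisily.
power, type_s, type_m = retrodesign_sim(true_effect=0.1, se=0.3)
print(f"power={power:.2f}, Type S={type_s:.2f}, exaggeration={type_m:.1f}x")
```

With these inputs the hypothetical study has roughly 6% power, a significant estimate carries about a one-in-six chance of having the wrong sign, and significant estimates overstate the true effect several-fold, which is the pattern the abstract warns about.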

Cited by 1,008 publications (536 citation statements) | References 32 publications
“…Hence, considerable resources would be saved by not performing future studies based on false premises. Increasing sample sizes is also desirable because studies with small sample sizes tend to yield inflated effect size estimates [11], and publication and other biases may be more likely in an environment of small studies [12]. We believe that efficiency gains would far outweigh losses.…”
Section: Potential Objections
confidence: 99%
“…This implies that a single replication attempt, with a 23% chance of incorrectly failing to detect an existing effect of the original size, is not enough to conclude that the effect does not exist (at least when one relies on the 5% significance threshold). The latter is amplified by the conclusion that most effects are overestimations, and hence, true to-be-replicated effects are smaller than those that are reported (Gelman & Carlin, 2014). Therefore, we can conclude that we could not replicate the original effect of identical size, but we cannot ascertain with high confidence that the effect (i.e., more specific time references in truthful than in deceptive intentions) does not exist.…”
Section: Discussion
confidence: 84%
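The 23% figure is one minus the replication's power to detect an effect of the original size. A power calculation of that kind can be reproduced with statsmodels; in the sketch below the per-group sample size is a hypothetical stand-in (the excerpt does not report it), chosen so that d = 0.54 yields roughly the 77% power the quote implies.

```python
from statsmodels.stats.power import TTestIndPower

d_original = 0.54  # original effect size quoted in the excerpt
n_per_group = 51   # hypothetical replication sample size, not from the excerpt

power = TTestIndPower().power(effect_size=d_original, nobs1=n_per_group,
                              ratio=1.0, alpha=0.05, alternative="two-sided")
print(f"power = {power:.2f}, Type II error rate = {1 - power:.2f}")
```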
“…Treating the original effect size (here d = 0.54) at face value can be misleading because most published effect sizes are overestimations of the true effect (Gelman & Carlin, 2014; Simonsohn, 2015). [2]

[2] We thank Timothy Luke for pointing us in that direction during the reviewing process.

To avoid inflating evidence for the null hypothesis, we also calculated the informed prior Bayes factor estimation using a corrected original effect size of 75% (d = 0.41, BF01 = 4.23), 50% (d = 0.27, BF01 = 2.41), 25% (d = 0.14, BF01 = 1.59), and 10% of the original (d = 0.05, BF01 = 1.33).…”
Section: Non-preregistered Analyses
confidence: 99%
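An informed-prior Bayes factor of this kind compares the likelihood of the observed t statistic under the null hypothesis against its average likelihood under a prior centered on the (shrunken) original effect size. The sketch below shows the general recipe using scipy; the observed t value, group sizes, and prior width are all hypothetical stand-ins, so the resulting BF01 values will not match those in the excerpt.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bf01_informed(t_obs, n1, n2, prior_mean, prior_sd):
    """BF01 for a two-sample t-test: H0: d = 0 versus
    H1: d ~ Normal(prior_mean, prior_sd) on Cohen's d."""
    df = n1 + n2 - 2
    scale = np.sqrt(n1 * n2 / (n1 + n2))  # maps d to the noncentrality parameter

    def integrand(d):
        # Likelihood of t_obs at effect d, weighted by the prior on d.
        return stats.nct.pdf(t_obs, df, d * scale) * stats.norm.pdf(d, prior_mean, prior_sd)

    like_h1, _ = quad(integrand, prior_mean - 8 * prior_sd, prior_mean + 8 * prior_sd)
    like_h0 = stats.t.pdf(t_obs, df)  # central t under the null
    return like_h0 / like_h1

# Hypothetical replication data; only the shrinkage factors mirror the excerpt.
for shrink in (0.75, 0.50, 0.25, 0.10):
    d_prior = shrink * 0.54
    bf01 = bf01_informed(t_obs=0.8, n1=60, n2=60, prior_mean=d_prior, prior_sd=0.1)
    print(f"prior centered at d = {d_prior:.2f}: BF01 = {bf01:.2f}")
```

Shrinking the prior toward zero makes the alternative harder to distinguish from the null, so BF01 drifts toward 1, the same qualitative pattern as in the excerpt.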
“…Low statistical power diminishes the probability that experimental findings are true and increases the probability that effect sizes are overestimated (Type M error) or point in the wrong direction (Type S error; Button et al., 2013; Gelman & Carlin, 2014; Ioannidis, 2005).…”
Section: What Are the Problems Plaguing Scientific Methodology?
confidence: 99%
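The dependence of both error types on power can also be computed in closed form for a simple z-test rather than by simulation (compare the sketch after the abstract above). The grid of effect-to-standard-error ratios below is an illustrative assumption:

```python
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha / 2)

# Illustrative grid: the true effect expressed in standard-error units.
for snr in (0.25, 0.5, 1.0, 2.0, 3.0):
    p_hi = 1 - norm.cdf(z - snr)  # significant with the correct sign
    p_lo = norm.cdf(-z - snr)     # significant with the wrong sign
    power = p_hi + p_lo
    type_s = p_lo / power         # P(wrong sign | significant)
    # E[|estimate| | significant] / true effect, from truncated-normal means.
    type_m = (norm.pdf(z - snr) + norm.pdf(z + snr)
              + snr * p_hi - snr * p_lo) / (power * snr)
    print(f"effect/se={snr:.2f}  power={power:.2f}  "
          f"Type S={type_s:.4f}  Type M={type_m:.2f}x")
```

As the ratio, and with it power, grows, the Type S rate collapses toward zero and the exaggeration ratio falls toward 1, which is exactly the sense in which low power inflates and sign-flips published effects.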