2018
DOI: 10.1016/j.jml.2018.07.004

The statistical significance filter leads to overoptimistic expectations of replicability

Abstract: We show that publishing results using the statistical significance filter (publishing only when the p-value is less than 0.05) leads to a vicious cycle of overoptimistic expectations of the replicability of results. First, we show analytically that when true statistical power is relatively low, computing power based on statistically significant results will lead to overestimates of power. Then, we present a case study using 10 experimental comparisons drawn from a recently published meta-analysis in psycholinguist…
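The analytical point in the abstract can be illustrated with a short simulation: when true power is low, estimating power from only the statistically significant (publishable) results inflates it, because significant results overestimate the effect size. The following is a minimal sketch, not the authors' code; the two-sample t-test setup, the effect size of 0.2, and the sample size of 30 per group are assumed values chosen for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d = 0.2     # assumed true standardized effect size (hypothetical)
n = 30           # assumed per-group sample size (hypothetical)
n_sims = 20_000  # number of simulated experiments

def approx_power(d, n, alpha=0.05):
    """Approximate two-sample power via the normal approximation."""
    se = np.sqrt(2.0 / n)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z = d / se
    return (1 - stats.norm.cdf(z_crit - z)) + stats.norm.cdf(-z_crit - z)

true_power = approx_power(true_d, n)

# Apply the significance filter: keep only experiments with p < .05,
# then compute power from the observed ("published") effect sizes.
sig_effects = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_d, 1.0, n)
    _, p = stats.ttest_ind(b, a)
    if p < 0.05:
        sig_effects.append(abs(b.mean() - a.mean()))  # sd is ~1 by construction

filtered_power = np.mean([approx_power(d, n) for d in sig_effects])

print(f"true power:                             {true_power:.2f}")
print(f"power estimated from significant-only:  {filtered_power:.2f}")
```

With these assumed values, the power recomputed from the filtered results comes out well above the true power, mirroring the cycle described in the abstract.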

Cited by 184 publications (198 citation statements)
References 73 publications (20 reference statements)
“…As discussed in Jäger et al (2017) and Vasishth et al (2018), low power and publication bias could be important factors that weaken the empirical claims.…”
Section: Discussion (mentioning)
Confidence: 96%
“…Some caution is also needed as regards the interpretation of the available data. As discussed in Jäger et al (2017) and Vasishth et al (2018), low power and publication bias could be important factors that weaken the empirical claims. Appendix B in Jäger et al (2017) shows that power for many of the published studies on interference could be as low as 10%-20%.…”
Section: Discussion (mentioning)
Confidence: 96%
“…Hence a study which really attains 80% power is not likely to suffer in its quest for publication, motivating c₉₅ = 0.8. However, it is well commented upon (Bland 2009, Vasishth & Gelman 2017) that often a priori sample size claims are exaggerated through various mechanisms, meaning that a study with less than 80% power might be advertised as having 80% power. This is the basis for setting c₅₀ = 0.5, i.e., truly possessing only 50% power does substantially reduce, but not eliminate, the chance of publication.…”
Mentioning
Confidence: 99%
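The c₉₅ and c₅₀ calibration in the statement above can be read as two points on a publication-probability curve over true power: roughly a 95% chance of publication at 80% power and a 50% chance at 50% power. The sketch below fits a logistic curve through those two points purely for illustration; the logistic form, this reading of the constants, and the helper names are assumptions, not the cited paper's actual model.

```python
import math

# Calibration points as read from the quoted passage (interpretation assumed):
# P(publish | power = 0.8) ≈ 0.95  and  P(publish | power = 0.5) ≈ 0.50.
c95, c50 = 0.8, 0.5

def logit(p):
    return math.log(p / (1 - p))

# Assume P(publish) = 1 / (1 + exp(-(a + b * power))) and solve the two
# calibration equations for the intercept a and slope b.
b = (logit(0.95) - logit(0.50)) / (c95 - c50)  # slope
a = logit(0.50) - b * c50                      # intercept

def pub_prob(power):
    """Assumed (illustrative) probability of publication given true power."""
    return 1.0 / (1.0 + math.exp(-(a + b * power)))

for pw in (0.1, 0.3, 0.5, 0.8):
    print(f"power = {pw:.1f}  ->  P(publish) ≈ {pub_prob(pw):.2f}")
```

Under these assumptions, low-powered studies are rarely published but not excluded outright, which is the qualitative behavior the quoted passage describes.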