2018
DOI: 10.1152/jn.00765.2017
May the power be with you: are there highly powered studies in neuroscience, and how can we get more of them?

Abstract: Statistical power is essential for robust science and replicability, but a meta-analysis by Button et al. in 2013 diagnosed a "power failure" for neuroscience. In contrast, Nord et al. (J Neurosci 37: 8051-8061, 2017) reanalyzed these data and suggested that some studies feature high power. We illustrate how publication and researcher bias might have inflated power estimates, and review recently introduced techniques that can improve analysis pipelines and increase power in neuroscience studies.

Cited by 54 publications (58 citation statements)
References 12 publications
“…Furthermore, small sample sizes imply higher variability around effect size estimates. In combination with publication bias (i.e., the tendency to publish mainly significant findings), reported effects thus tend to be overestimated (Algermissen and Mehler, 2018), rendering the scientific literature in psychology and neuroscience an unreliable basis for conducting power analyses for future studies (Allen and Mehler, 2019; Schäfer and Schwarz, 2019; Szucs and Ioannidis, 2017).…”
Section: Statistical Power/Sensitivity
confidence: 99%
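The mechanism described in the quote above — small samples plus publish-only-if-significant filtering inflating reported effects — can be sketched with a minimal Monte Carlo simulation. All parameters here (a true Cohen's dz of 0.3, n = 20 per study, the df = 19 critical t of 2.093) are illustrative assumptions, not values from the cited studies:

```python
import random
import statistics

random.seed(1)
TRUE_DZ, N, T_CRIT = 0.3, 20, 2.093  # true effect, sample size, two-sided t cutoff (df=19, alpha=.05)

def study():
    """Run one small study; return (observed dz, reached significance?)."""
    xs = [random.gauss(TRUE_DZ, 1.0) for _ in range(N)]
    m, sd = statistics.mean(xs), statistics.stdev(xs)
    t = m / (sd / N ** 0.5)
    return m / sd, abs(t) > T_CRIT

results = [study() for _ in range(20_000)]
all_dz = [dz for dz, _ in results]
published = [dz for dz, sig in results if sig]  # only "significant" studies get published

# The mean observed dz among "published" studies substantially exceeds
# the true effect of 0.3, while the unfiltered mean sits near 0.3.
```

Because only samples that happened to overshoot the true effect clear the significance threshold at this sample size, conditioning on significance selects for overestimates — exactly why such literatures are a shaky basis for a priori power analyses.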
“…To mitigate this and enhance the reliability of our findings, we used multiple resampling techniques. With the field progressing towards larger sample sizes with more power (56), we encourage future studies to keep moving in this direction. Third, the implications of our findings are limited by the original study design.…”
Section: Discussion
confidence: 99%
“…Considered as two key means to improve the reliability of CSE assessment [5], we reasoned this would prevent post-hoc arbitrary data trimming, which can lead to circular analyses and double-dipping [6]. Namely, doing so can erroneously reduce data variability and inflate the effect size, thus giving the impression of increased statistical power [7]. In order to increase the reliability of conclusions in neuroscience, it is best advised to discourage such practice [6,8].…”
Section: Dear Editor
confidence: 99%
“…Similarly, for the one-sample t-test conducted on the median values of all 6 post-measurements (normalized MEP data), the achieved power was 51.2% (t(17) = 2.181, p = 0.0435, Cohen's dz = 0.514). Given the current replication crisis in neuroscience [7,9], efforts must be promptly devoted to incorporating such considerations when interpreting results in order to increase the validity and reliability of scientific conclusions.…”
Section: Dear Editor
confidence: 99%
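The achieved-power figure quoted above can be checked by simulation. The Cohen's dz = 0.514 and n = 18 (df = 17) come from the quote; the critical t of 2.110 is the standard two-sided cutoff for df = 17 at alpha = .05. This is a sketch of the calculation, not the original authors' code:

```python
import random
import statistics

random.seed(7)
DZ, N, T_CRIT = 0.514, 18, 2.110  # quoted effect size and n; two-sided t cutoff for df=17

def significant():
    """Simulate one study at the quoted effect size; did it reach p < .05?"""
    xs = [random.gauss(DZ, 1.0) for _ in range(N)]
    t = statistics.mean(xs) / (statistics.stdev(xs) / N ** 0.5)
    return abs(t) > T_CRIT

# Fraction of simulated studies reaching significance = estimated power;
# this lands near the 51.2% achieved power reported in the quote.
power = sum(significant() for _ in range(20_000)) / 20_000
```

A power of roughly one in two means a direct replication of such a study would be expected to miss significance about half the time even if the effect were exactly as reported, which is the reliability concern the quoted letter raises.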