2016
DOI: 10.1111/brv.12315

Detecting and avoiding likely false‐positive findings – a practical guide

Abstract: Recently there has been a growing concern that many published research findings do not hold up in attempts to replicate them. We argue that this problem may originate from a culture of 'you can publish if you found a significant effect'. This culture creates a systematic bias against the null hypothesis which renders meta-analyses questionable and may even lead to a situation where hypotheses become difficult to falsify. In order to pinpoint the sources of error and possible solutions, we review current scient…

Cited by 361 publications (429 citation statements)
References 116 publications
“…We then selected fixed effects by hierarchically removing the effects whose 95% CI encompassed zero, starting with the effects with a posterior mode closer to zero. Because this stepwise method may increase the risk of type-I error (Mundry and Nunn, 2009; Forstmeier et al., 2017), we compared the 95% CI in the selected fixed structure to those obtained from the full models, but retained estimates from the selected models. Removing the genetic correlation effect after selecting the fixed effects did not change the results (results not detailed, but see Appendix S5 for the full model output).…”
Section: Implementation of Models
Citation type: mentioning
Confidence: 99%
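As a rough illustration of the backward-elimination procedure this excerpt describes, the sketch below drops, one at a time, any fixed effect whose 95% interval encompasses zero, starting with the estimate closest to zero, and then compares the reduced model against the full model. This is our own minimal frequentist analogue using statsmodels OLS on simulated data; the cited study works with Bayesian posterior modes and credible intervals, and all variable names here are hypothetical.

```python
# Sketch of "hierarchically removing fixed effects whose 95% CI encompasses
# zero, starting with the estimate closest to zero", then comparing the
# reduced model to the full model. Simulated data; frequentist OLS stands in
# for the cited study's Bayesian credible intervals. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),
    "x3": rng.normal(size=n),
})
data["y"] = 0.5 * data["x1"] + rng.normal(size=n)  # only x1 has a true effect

predictors = ["x1", "x2", "x3"]
full = sm.OLS(data["y"], sm.add_constant(data[predictors])).fit()

current = list(predictors)
while current:
    fit = sm.OLS(data["y"], sm.add_constant(data[current])).fit()
    ci = fit.conf_int(alpha=0.05).drop("const")        # 95% CI per fixed effect
    removable = ci[(ci[0] < 0) & (ci[1] > 0)]          # CIs that encompass zero
    if removable.empty:
        break
    drop = fit.params[removable.index].abs().idxmin()  # estimate closest to zero
    current.remove(drop)

print("full-model 95% CIs:\n", full.conf_int(alpha=0.05))
print("fixed effects retained after backward elimination:", current)
```

Reporting the full-model intervals alongside the selected model, as the excerpt describes, is the safeguard against the type-I-error inflation that stepwise selection can introduce.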
“…As stated above, social psychological studies are frequently underpowered given the small effect sizes observed in the field. This likely inflates the rate of false-positive findings in the published literature that are later unreplicable (Forstmeier, Wagenmakers, & Parker, 2016; Ioannidis, 2005; Lakens & Evers, 2014).…”
Section: The Important (Neglect of) Statistical Power
Citation type: mentioning
Confidence: 99%
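To make the excerpt's point concrete, here is a small back-of-the-envelope calculation (our own illustration, not taken from the cited papers) of how low power raises the share of false positives among nominally significant results; the prior probability of 0.25 and the two power levels are arbitrary assumptions.

```python
# Back-of-the-envelope positive-predictive-value arithmetic (illustrative
# assumptions only): of all results that reach p < alpha, what fraction are
# false positives, given the power of the test and the prior odds of a true effect?
def false_positive_share(prior_true, power, alpha=0.05):
    """Fraction of significant results that are false positives."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

for power in (0.80, 0.35):  # a well-powered vs. a typically underpowered study
    share = false_positive_share(prior_true=0.25, power=power, alpha=0.05)
    print(f"power = {power:.2f}: {share:.0%} of significant findings are false positives")
```

In this example, dropping power from 0.80 to 0.35 roughly doubles the false-positive share, which is the mechanism the excerpt points to.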
“…The problem that has been pointed out by Nosek, Wagenmakers, and others (Bender & Lange, 2001; Dahl et al., 2008; De Groot, 1956/2014; Forstmeier et al., 2016; Nosek et al., 2017; Nosek & Lakens, 2014; Wagenmakers, 2016) is that although the precise number of tests is known in preregistered confirmatory analyses, it is not usually known in exploratory analyses. This is because exploratory analyses tend to involve a lot of tests, and only a subset of those tests is documented.…”
Section: Why Is Multiple Testing Problematic in Exploratory Analyses?
Citation type: mentioning
Confidence: 99%
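A minimal sketch (ours, not the cited authors') of why the number of tests matters: the familywise probability of at least one false positive under the null grows quickly with the number of independent tests, and if that number is undocumented, as the excerpt notes for exploratory analyses, it cannot even be computed.

```python
# Familywise error rate for k independent tests at nominal alpha = 0.05:
# the chance of at least one false positive when every null hypothesis is true.
alpha = 0.05
for k in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} tests: P(at least one false positive) = {fwer:.2f}")
```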
“…The replication crisis (e.g., Munafò et al., 2017) has led several researchers to conclude that it is inappropriate to interpret p values in exploratory analyses (Nosek, Ebersole, DeHaven, & Mellor, 2017; Nosek & Lakens, 2014; Forstmeier, Wagenmakers, & Parker, 2016; Wagenmakers, 2016; see also Dahl, Grotle, Benth, & Natvig, 2008; De Groot, 1956/2014). These researchers argue that "in exploratory analysis, p-values lose their meaning due to an unknown inflation of the alpha-level" (Nosek & Lakens, 2014, p. 138).…”
Citation type: mentioning
Confidence: 99%
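As a complementary simulation (again our own illustration under assumed settings, not an analysis from the cited work), reporting only the smallest of several exploratory p-values inflates the effective alpha well above the nominal 0.05, which is the "unknown inflation of the alpha-level" the quoted passage refers to.

```python
# Simulate k exploratory two-sample t-tests per "study" with no true effects,
# and count how often the smallest p-value (the one that would get reported)
# falls below 0.05. Settings (k, n, n_sim) are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, k, n = 2000, 10, 30

cherry_picked_hits = 0
for _ in range(n_sim):
    pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
             for _ in range(k)]
    if min(pvals) < 0.05:  # only the "best" exploratory result is reported
        cherry_picked_hits += 1

print(f"effective alpha when reporting the best of {k} null tests: "
      f"{cherry_picked_hits / n_sim:.2f} (nominal 0.05)")
```

With k = 10 independent null tests the simulated rate should come out close to the analytic 1 − 0.95^10 ≈ 0.40.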