2018
DOI: 10.1371/journal.pone.0200303

Questionable research practices in ecology and evolution

Abstract: We surveyed 807 researchers (494 ecologists and 313 evolutionary biologists) about their use of Questionable Research Practices (QRPs), including cherry picking statistically significant results, p hacking, and hypothesising after the results are known (HARKing). We also asked them to estimate the proportion of their colleagues that use each of these QRPs. Several of the QRPs were prevalent within the ecology and evolution research community. Across the two groups, we found 64% of surveyed researchers reported…
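
As an illustration of why cherry picking matters, here is a minimal simulation sketch (not from the paper; the sample size and number of outcomes are assumptions for illustration) showing how reporting only the most significant of several null outcomes inflates the false-positive rate well past the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cherry_picked_study(n=20, n_outcomes=5):
    """Simulate one null study that measures several outcomes and
    reports only the smallest p-value (cherry picking, a QRP)."""
    pvals = []
    for _ in range(n_outcomes):
        # Both groups drawn from the SAME distribution: any "effect" is noise.
        a = rng.normal(size=n)
        b = rng.normal(size=n)
        pvals.append(stats.ttest_ind(a, b).pvalue)
    return min(pvals) < 0.05  # only the "best" outcome is reported

fp_rate = np.mean([cherry_picked_study() for _ in range(5000)])
print(f"False-positive rate with cherry picking: {fp_rate:.2%}")
# With 5 independent null outcomes, roughly 1 - 0.95**5 ≈ 23%,
# far above the nominal 5%.
```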

Cited by 251 publications (250 citation statements)
References 41 publications (62 reference statements)
“…We have seen that the deception literature exhibits substantial methodological problems: sample sizes are too small to detect plausible effects, results are reported selectively, and the number of significant results appears to have been substantially inflated. This is consistent with the general finding that questionable practices are quite common in science (Agnoli et al., 2017; Fraser et al., 2018; John et al., 2012; Kerr, 1998; Martinson et al., 2005). Why are questionable practices so prevalent?…”
Section: Results (supporting)
confidence: 89%
“…Despite the long existence of an extensive methodological literature identifying poor scientific practices and proposing solutions (e.g., Cohen, 1962, 1969; Meehl, 1978; Sterling, 1959), researchers frequently misunderstand and inadequately address statistical concepts fundamental to their methodologies, such as power (Bakker et al., 2016; Tversky & Kahneman, 1971) and p-values (Gigerenzer, 2004; Greenland et al., 2016). Separately but relatedly, meta-scientists have also documented the alarmingly wide prevalence of questionable research practices in psychology (and other sciences), such as selective reporting, data peeking, unplanned statistical analyses, and hypothesizing after the results are known (see, e.g., Agnoli et al., 2017; Bakker et al., 2012; Fraser et al., 2018; John, Loewenstein, & Prelec, 2012; Kerr, 1998; Simmons, Nelson, & Simonsohn, 2011). There is ample evidence that methodological flaws in psychology and other disciplines have persisted in spite of clear evidence of their occurrence and the existence of productive alternatives.…”
Section: Trouble in the Land of Toys (mentioning)
confidence: 99%
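
The "data peeking" this passage mentions is straightforward to demonstrate. Below is a minimal sketch, assuming a two-group comparison with an interim test after every batch of new subjects (the batch sizes and stopping bounds are illustrative, not from the cited studies):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peeking_study(n_start=10, n_max=50, step=5, alpha=0.05):
    """Null study that re-tests after each batch of new subjects and
    stops as soon as p < alpha (optional stopping / data peeking)."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True            # "significant" result gets reported
        if len(a) >= n_max:
            return False           # gave up; both groups were pure noise
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))

fp_rate = np.mean([peeking_study() for _ in range(5000)])
print(f"Type I error with data peeking: {fp_rate:.2%}")
# Repeated looks push the error rate well above the nominal 5%.
```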
“…One does not have to search hard to find plenty of published concerns about the credibility of science. These include overstated and unreliable results (Ioannidis 2005; Harris and Sumpter 2015; Henderson and Thomson 2017), conflicts of interest (McGarity and Wagner 2008; Stokstad 2012; Boone et al. 2014; Oreskes et al. 2015; Tollefson 2015), profound bias (Atkinson and Macdonald 2010; Bes-Rastrollo et al. 2014; Cormier 2015a, 2015b), suppression of results to protect financial interests (Wadman 1997; Wise 1997), deliberate misinformation campaigns as a public relations strategy for financial or ideological aims (Baba et al. 2005; McGarity and Wagner 2008; Gleick and 252 coauthors 2010; Oreskes and Conway 2011), political interference with or suppression of results from government scientists (Hutchings 1997; Stedeford 2007; Ogden 2016), self-promotion and sabotage of rivals in hypercompetitive settings (Martinson et al. 2005; Edwards and Roy 2016; Ross 2017), publication bias, peer review and authorship games (Young et al. 2008; Fanelli 2012; Callaway 2015), selective reporting of data or adjusting the questions to fit the data (Fraser et al. 2018), overhyped institutional press releases that are incommensurate with the actual science behind them (Cope and Allison 2009; Sumner et al. 2014), dodgy journals (Bohannon 2013), and dodgy conferences (Van Noorden 2014).…”
Section: Introduction (mentioning)
confidence: 99%
“…This unreliability emerges for a variety of reasons, but the most common risk factors include conducting studies with small samples; selectively reporting or publishing statistically significant outcomes or outcomes from best models; conducting undisclosed exploratory data analyses; claiming to have tested a priori hypotheses that were instead generated in response to the result in question (HARKing, i.e., hypothesizing after results are known); and preferentially testing or reporting support for surprising hypotheses (those with low prior probability). These are not equally problematic in all disciplines, but we have good reason to believe that at least the first 4 are common in ecology and related fields (Parker et al. 2016; Fraser et al. 2018).…”
(mentioning)
confidence: 99%
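
The first risk factor in that list, small samples, can be quantified with a simple power simulation. A hedged sketch, assuming a two-sample t-test and a plausible standardized effect of d = 0.3 (both assumptions chosen for illustration, not taken from the cited papers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power_sim(n_per_group, d=0.3, n_sims=5000, alpha=0.05):
    """Estimate the power of a two-sample t-test by simulation."""
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(d, 1.0, n_per_group)  # true effect of d SDs
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (20, 50, 175):
    print(f"n = {n:>3} per group -> power ≈ {power_sim(n):.2f}")
# Roughly 0.15, 0.32, and 0.80: studies with small samples rarely
# detect plausible effects, as the quoted passage argues.
```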