2019
DOI: 10.1002/ece3.5836

Accumulating evidence in ecology: Once is not enough

Abstract: Many published studies in ecological science are viewed as stand‐alone investigations that purport to provide new insights into how ecological systems behave based on single analyses. But it is rare for results of single studies to provide definitive results, as evidenced in current discussions of the "reproducibility crisis" in science. The key step in science is the comparison of hypothesis‐based predictions with observations, where the predictions are typically generated by hypothesis‐specific models. Repea…

Cited by 65 publications (65 citation statements)
References 93 publications
“…Alternative metrics and alternative models should be considered, however, and long-term monitoring under the NPS Vital Signs program can support this goal. Data from monitoring programs can be used to discriminate among our hypotheses and others in an iterative process of model updating and accumulation of evidence to advance our understanding and guide resource management [81]. Finally, the spatial scale of monitoring should match the vagility of the seed predator.…”
Section: PLOS ONE
confidence: 99%
“…Our study also illustrates that one can discriminate between a priori hypotheses and ad hoc hypotheses in ways that relate to the entire issue of ‘replicability’ in science (e.g., Nichols et al. 2019). Here we set out to test 8 different hypotheses about how resource competition affects the relations between community assembly and the functioning of ecosystems.…”
Section: Discussion
confidence: 69%
“…Following the recommendations of TOP, journals must be open to publishing replications and negative results (Nakagawa & Parker, 2015), dispensing with statements encouraging novelty. If journals can relieve the pressure for positive/significant results, the incentives to undertake questionable practices will cease to exist (Nilsen et al., 2020), countering publication biases (Nichols et al., 2019). Replications do not suffer the lower citation rates that journals fear (Forstmeier et al., 2017), and researchers welcome them (Fraser et al., 2019).…”
Section: Publish Negative Results and Replications
confidence: 99%
“…
• Include spatial and temporal scale of study (i.e., study site and plot extents, study duration; be as specific as possible)
• Provide clear details on how means/medians are calculated from the total sample or subsets
• Report sample size and confidence/credible intervals for each statistical analysis
• Report all results regardless of outcome/significance
• Further suggestions can be found in Fidler et al. (2018) and Gerstner et al. (2017)

By following best practice for reporting we can maximise studies' utility and thus optimise for meta-analysis inclusion (Hillebrand & Gurevitch, 2013; Nichols et al., 2019). Ensuring that methods and statistics are fully reported boosts reach and citations (Gerstner et al., 2017).…”
Section: Open Data
confidence: 99%