2022
DOI: 10.31234/osf.io/g9ja2
Preprint

Reporting and interpreting non-significant results in animal cognition research

Abstract: How negative results are reported and interpreted following null hypothesis significance testing is often criticised. With small sample sizes and often low number of test trials, studies in animal cognition are prone to producing non-significant p-values, irrespective of whether this is a false negative or true negative result. Thus, we assessed how negative results are reported and interpreted across published articles in animal cognition and related fields. In this study, we manually extracted and classified…

Cited by 5 publications (5 citation statements)
References 25 publications
“…This number rose to 60% for short reports published in 2002-2004. Among non-significant results reviewed by Fidler et al. (2006), about 30% were linked to a no effect-misinterpretation, as well as about 41% of non-significant effects in animal cognition research (Farrar et al., 2022), and about 56% of clinical trials yielding non-significant results (Hemming, Javid, & Taljaard, 2022). Nieuwenhuis et al. (2011) reported that in about 50% of neuroscience studies, researchers interpreted significant effects to be larger or more meaningful than non-significant effects without conducting appropriate tests for differences.…”
Section: The Prevalence Of Non-significant Results and Related Misint...
confidence: 99%
“…A second misinterpretation is to infer that a non-significant effect meaningfully differs from another significant effect, without having tested whether the difference between the two effects is itself significant (Gelman & Stern, 2006; Greenland et al., 2016; Nieuwenhuis, Forstmann, & Wagenmakers, 2011). Although both of these misinterpretations have been found in multiple other areas (e.g., Farrar et al., 2022; Hoekstra, Finch, Kiers, & Johnson, 2006; Nieuwenhuis et al., 2011), we do not know about the frequency of these misinterpretations in educational research. More importantly, it has not been investigated whether such misinterpretations are linked to actual inferences that might lead to distorted implications for educational theory, practice, or policy.…”
Section: Introduction
confidence: 99%
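The second misinterpretation quoted above (treating a significant and a non-significant effect as meaningfully different without testing the difference itself; Gelman & Stern, 2006) can be illustrated with a small simulation. The sketch below is not taken from the cited paper; the group names, effect sizes, and sample sizes are invented purely for illustration.

```python
# Hypothetical illustration of the "difference between significant and
# non-significant" fallacy (Gelman & Stern, 2006). All values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 20                                   # small samples, as is common in animal cognition
control = rng.normal(0.0, 1.0, n)
group_a = rng.normal(0.9, 1.0, n)        # larger simulated effect
group_b = rng.normal(0.5, 1.0, n)        # smaller simulated effect

p_a = stats.ttest_ind(group_a, control).pvalue     # may fall below .05
p_b = stats.ttest_ind(group_b, control).pvalue     # may not
p_diff = stats.ttest_ind(group_a, group_b).pvalue  # the comparison that is often skipped

print(f"A vs control: p = {p_a:.3f}")
print(f"B vs control: p = {p_b:.3f}")
print(f"A vs B:       p = {p_diff:.3f}")
# Even when p_a < .05 and p_b > .05, p_diff is frequently > .05, so the two
# p-values alone do not license the conclusion that effect A exceeds effect B.
```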
“…The usefulness of such tools is not confined to the social sciences and they have also been considered across other disciplines (e.g. animal behaviour [15,16], cancer biology [17], and economics [18]). While the discussions surrounding openness and reproducibility have led to promising and productive changes in research culture (e.g.…”
Section: Teaching Open and Reproducible Scholarship: A Critical Revie...
confidence: 99%
“…Many practices have been developed to facilitate these goals, such as study pre-registration and Registered Reports (e.g., Lindsay et al., 2016; Nosek et al., 2015), open materials, code, and/or data (Houtkoop et al., 2018), open access publishing (Nosek & Bar-Anan, 2012), and a focus on replication studies (Open Science Collaboration, 2015; Tierney et al., 2020, 2021). The usefulness of such tools is not confined to the social sciences, and it is important to note that they have also been considered across other disciplines (e.g., animal behavior, Farrar et al., 2020, 2022; cancer biology, Errington et al., 2021; economics, Camerer et al., 2016). While the discussions surrounding openness and reproducibility have led to promising and productive changes in research culture (e.g., Baum et al., 2022; Munafò et al., 2022; Stewart et al., 2022), there remains progress to be made (see Devezer et al., 2021; Ledgerwood et al., 2022; Whitaker & Guest, 2020).…”
Section: The Impact Of Open and Reproducible Scholarship On Students'...
confidence: 99%