Direct replication studies follow an original experiment's methods as closely as possible. They provide information about the reliability and validity of an original study's findings. The present paper asks what comparative cognition should expect if its studies were directly replicated, and how researchers can use this information to improve the reliability of future research. Because published effect sizes are likely overestimated, comparative cognition researchers should not expect findings with p-values just below the significance level to replicate consistently. Nevertheless, there are several statistical and design features that can help researchers identify reliable research. However, researchers should not simply aim for maximum replicability when planning studies; comparative cognition faces strong replicability-validity and replicability-resource trade-offs. Next, the paper argues that it may not even be possible to perform truly direct replication studies in comparative cognition because of: 1) a lack of access to the species of interest; 2) real differences in animal behavior across sites; and 3) sample size constraints producing very uncertain statistical estimates, meaning that it will often not be possible to detect statistical differences between original and replication studies. These three reasons suggest that many claims in the comparative cognition literature are practically unfalsifiable, and this presents a challenge for cumulative science in comparative cognition. To address this challenge, comparative cognition can begin to formally assess the replicability of its findings, improve its statistical thinking, and explore new infrastructures that allow the field to create and combine the data necessary to understand how cognition evolves.
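The claim that findings with p-values just below the significance level tend not to replicate can be illustrated with a short simulation (the true effect size, group sizes, and simulation counts below are illustrative assumptions, not values from the article): when underpowered studies are filtered by statistical significance, the effects that pass the filter are systematically overestimated, so a direct replication run at the published effect size will disappoint.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_d = 0.3      # assumed true standardized effect (illustrative)
n = 20            # animals per group -- a typical small comparative sample
n_sims = 5000     # number of simulated original studies

observed_d = np.empty(n_sims)
significant = np.empty(n_sims, dtype=bool)
for i in range(n_sims):
    a = rng.normal(true_d, 1.0, n)   # treatment group
    b = rng.normal(0.0, 1.0, n)      # control group
    _, p = stats.ttest_ind(a, b)
    # Cohen's d with a pooled standard deviation
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    observed_d[i] = (a.mean() - b.mean()) / pooled_sd
    significant[i] = p < 0.05

print(f"mean d, all studies:       {observed_d.mean():.2f}")
print(f"mean d, significant only:  {observed_d[significant].mean():.2f}")
print(f"share significant (power): {significant.mean():.2f}")
```

Across all simulated studies the average estimate sits near the true effect, but among the significant ones it is roughly double it: at this sample size, only inflated estimates clear the significance threshold.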
Animal cognition research often involves small and idiosyncratic samples. This can constrain the generalizability and replicability of a study’s results and prevent meaningful comparisons between samples. However, there is little consensus about what makes a strong replication or comparison in animal research. We apply a resampling definition of replication to answer these questions in Part 1 of this article, and, in Part 2, we focus on the problem of representativeness in animal research. Through a case study and a simulation study, we highlight how and when representativeness may be an issue in animal behavior and cognition research and show how representativeness problems can be viewed through the lenses of: i) replicability, ii) generalizability and external validity, iii) pseudoreplication, and iv) theory testing. Next, we discuss when and how researchers can improve their ability to learn from small-sample research through: i) increasing heterogeneity in experimental design, ii) increasing homogeneity in experimental design, and iii) statistically modeling variation. Finally, we describe how the strongest solutions will vary depending on the goals and resources of individual research programs and discuss some barriers to implementing them.
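The pseudoreplication lens above can be made concrete with a small simulation (the site counts, sample sizes, and variance components below are illustrative assumptions, not values from the article): when animals are clustered within sites and sites genuinely differ, a standard error that treats every animal as an independent observation understates the real sampling variability of the pooled mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative numbers: 4 testing sites, 10 animals per site,
# with genuine between-site differences in mean performance.
n_sites, n_per_site = 4, 10
site_sd, animal_sd = 0.5, 1.0
n_sims = 2000

grand_means, naive_ses = [], []
for _ in range(n_sims):
    site_effects = rng.normal(0.0, site_sd, n_sites)
    y = (site_effects[:, None]
         + rng.normal(0.0, animal_sd, (n_sites, n_per_site))).ravel()
    grand_means.append(y.mean())
    # "Naive" SE that treats all 40 animals as independent
    naive_ses.append(y.std(ddof=1) / np.sqrt(y.size))

empirical_sd = np.std(grand_means)   # actual variability of the estimate
mean_naive_se = np.mean(naive_ses)   # what a pseudoreplicated analysis reports

print(f"actual SD of the estimated mean:     {empirical_sd:.3f}")
print(f"average naive (pseudoreplicated) SE: {mean_naive_se:.3f}")
```

The naive standard error is substantially smaller than the estimate's actual sampling variability. Statistically modeling the variation, for example with a site-level random intercept in a mixed model, recovers the larger, honest uncertainty.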
Comparative cognition and behavior research compares performance across species to understand how cognitive abilities have evolved. Ideally, this requires large and diverse samples; however, these are difficult for single labs or institutions to obtain, leading to potential reproducibility and generalization issues with small, less representative samples. To help mitigate these issues, we are establishing a multi-site collaborative Open Science approach called ManyBirds, with the aim of providing new insight into the evolution of avian cognition and behavior through large-scale comparative studies, following the lead of the exemplary ManyPrimates, ManyBabies, and ManyDogs projects. Here, we outline a) the replicability crisis and why we should study birds, including the origin of modern birds, avian brains, and convergent evolution of cognition; b) the current state of the avian cognition field, including a ‘snapshot’ review; and c) the ManyBirds project, with its plans, infrastructure, limitations, implications, and future directions. In sharing this process, we hope to help other researchers devise similar projects in other taxa, such as non-avian reptiles or mammals, and to encourage further collaborations with ManyBirds and related ManyX projects. Ultimately, we hope to promote collaboration between ManyX projects to allow for wider investigation of the evolution of cognition across all animals, potentially including humans.