Many animal social structures are organized hierarchically, with some individuals monopolizing resources. Dominance hierarchies have received great attention from behavioural and evolutionary ecologists. There are many methods for inferring hierarchies from social interactions. Yet, there are no clear guidelines about how many observed dominance interactions (i.e. sampling effort) are necessary for inferring reliable dominance hierarchies, nor are there any established tools for quantifying their uncertainty. We simulate interactions (winners and losers) in scenarios of varying steepness (the probability that a dominant defeats a subordinate based on their difference in rank). Using these data, we (1) quantify how the number of interactions recorded and the steepness of the hierarchy affect the performance of five methods for inferring hierarchies, (2) propose an amendment that improves the performance of a popular method, and (3) suggest two easy procedures to measure uncertainty and steepness in the inferred hierarchy. We find that the ratio of interactions to individuals required to infer reliable hierarchies is surprisingly low, but depends on the steepness of the hierarchy and the method used. We show that David's score and our novel randomized Elo-rating are the best methods when hierarchies are not extremely steep, whereas the original Elo-rating, the I&SI and the recently described ADAGIO perform less well. In addition, we show that two simple methods can be used to estimate uncertainty at the individual and group level, and that the randomized Elo-rating repeatability provides researchers with a standardized measure valid for comparing the steepness of different hierarchies. We provide several worked examples to guide researchers interested in studying dominance hierarchies. Methods for inferring dominance hierarchies are relatively robust.
We recommend a ratio of observed interactions to individuals of at least 10 (for steep hierarchies), and ideally 20, as a good benchmark. Our simple procedures for estimating uncertainty in the observed data will facilitate evaluating whether sufficient data have been collected, while plotting the shape of the hierarchy will provide new insights into the social structure of the study organism.
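The randomized Elo-rating described above can be sketched as follows: standard Elo-rating updates each individual's score after every observed interaction, which makes the final scores depend on the order in which interactions happened to be observed; randomizing the interaction order many times and averaging the resulting scores removes that dependence. The sketch below is a minimal illustration under stated assumptions (function names, the update constant `k`, the logistic scale, and the toy data are ours, not the authors'; it is not the published implementation):

```python
import math
import random

def elo_scores(interactions, k=100.0, start=1000.0):
    """Sequential Elo ratings from (winner, loser) interactions.

    After each interaction the winner gains, and the loser loses,
    k * (1 - p), where p is the expected probability of the observed
    outcome given the current score difference (logistic curve).
    """
    scores = {}
    for winner, loser in interactions:
        sw = scores.setdefault(winner, start)
        sl = scores.setdefault(loser, start)
        p = 1.0 / (1.0 + math.exp(-0.01 * (sw - sl)))  # expected win prob.
        scores[winner] = sw + k * (1.0 - p)
        scores[loser] = sl - k * (1.0 - p)
    return scores

def randomized_elo(interactions, n_rand=1000, seed=1, **kw):
    """Average Elo scores over many shuffled interaction orders,
    removing standard Elo's sensitivity to observation order."""
    rng = random.Random(seed)
    seq = list(interactions)
    totals = {}
    for _ in range(n_rand):
        rng.shuffle(seq)
        for ind, s in elo_scores(seq, **kw).items():
            totals[ind] = totals.get(ind, 0.0) + s
    return {ind: t / n_rand for ind, t in totals.items()}

# A perfectly steep three-individual hierarchy (A > B > C), with a
# ratio of interactions to individuals of 10, recovers the true order.
obs = [("A", "B"), ("B", "C"), ("A", "C")] * 10
ranks = randomized_elo(obs, n_rand=100)
assert ranks["A"] > ranks["B"] > ranks["C"]
```

Averaging over shuffled orders also yields a per-individual score distribution, which is one natural route to the individual-level uncertainty estimates the abstract mentions.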
1. Publication bias threatens the validity of quantitative evidence from meta-analyses, as it results in some findings (e.g. 'positive' results) being overrepresented in meta-analytic datasets because they are published more frequently or sooner. Unfortunately, existing methods for testing for the presence of publication bias, or for assessing its impact on meta-analytic results, are unsuitable for datasets with high heterogeneity and non-independence, as is common in ecology and evolutionary biology.
2. We first review both classic and emerging publication bias tests (e.g. funnel plots, Egger's regression, cumulative meta-analysis, fail-safe N, trim-and-fill tests, p-curve and selection models), showing that some tests cannot handle heterogeneity and, more importantly, that none of the methods can deal with non-independence. For each method, we estimate current usage in ecology and evolutionary biology, based on a representative sample of 102 meta-analyses published in the last ten years.
3. We then propose a new method using multilevel meta-regression, which can model both heterogeneity and non-independence, by extending existing regression-based methods (i.e. Egger's regression). We describe how our multilevel meta-regression can test not only for publication bias but also for time-lag bias, and how it can be supplemented by residual funnel plots.
4. Overall, we provide ecologists and evolutionary biologists with practical recommendations on which methods are appropriate to employ given independent and non-independent effect sizes. No method is ideal, and more simulation studies are required to understand how Type I and II error rates are affected by complex data structures. Still, the limitations of these methods do not justify ignoring publication bias in ecological and evolutionary meta-analyses.
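The regression-based test that the proposed multilevel method extends can be illustrated with classic Egger's regression: regress the standardized effect size (effect / SE) on precision (1 / SE); an intercept far from zero indicates funnel-plot asymmetry consistent with publication bias. Below is a minimal NumPy sketch of that classic single-level test (the function name and simulated data are ours for illustration; it does not include the random effects for heterogeneity and non-independence that the multilevel meta-regression adds):

```python
import numpy as np

def eggers_regression(effects, ses):
    """Classic Egger's test: regress standardized effects (effect / SE)
    on precision (1 / SE). Returns the intercept and its t-statistic;
    an intercept far from zero suggests funnel-plot asymmetry."""
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses                                     # standardized effects
    X = np.column_stack([np.ones_like(ses), 1.0 / ses])   # [intercept, precision]
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    resid = z - X @ beta
    dof = len(z) - 2
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    return beta[0], beta[0] / np.sqrt(cov[0, 0])

# Simulated example: adding bias proportional to each study's SE
# (small studies report inflated effects) shifts the intercept
# and inflates its t-statistic relative to the unbiased data.
rng = np.random.default_rng(0)
ses = rng.uniform(0.05, 0.5, 300)        # per-study standard errors
effects = rng.normal(0.2, ses)           # unbiased effect sizes
b0, t0 = eggers_regression(effects, ses)
b1, t1 = eggers_regression(effects + 1.5 * ses, ses)  # SE-dependent bias
assert t1 > t0
```

The multilevel extension described in point 3 keeps this same fixed-effect predictor (a measure of effect-size uncertainty) but fits it within a multilevel model that can accommodate heterogeneity and non-independent effect sizes.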
Access to analytical code is essential for transparent and reproducible research. We review the state of code availability in ecology using a random sample of 346 nonmolecular articles published between 2015 and 2019 under mandatory or encouraged code-sharing policies. Our results call for urgent action to increase code availability: only 27% of eligible articles were accompanied by code. In contrast, data were available for 79% of eligible articles, highlighting that code availability is an important limiting factor for computational reproducibility in ecology. Although the percentage of ecological journals with mandatory or encouraged code-sharing policies has increased considerably, from 15% in 2015 to 75% in 2020, our results show that code-sharing policies are not adhered to by most authors. We hope these results will encourage journals, institutions, funding agencies, and researchers to address this alarming situation.
A recent meta-analysis concluded that 'transgenerational effects are widespread, strong and persistent'. We identify biases in the literature search, data and analyses that call that conclusion into question. Reanalyses indicate that few studies actually tested transgenerational effects, making it challenging to disentangle condition-transfer from anticipatory parental effects, and providing little insight into the underlying mechanisms.