Joint species distribution models (JSDMs) account for biotic interactions and missing environmental predictors in correlative species distribution models. Several different JSDMs have been proposed in the literature, but the use of different or conflicting nomenclature and statistical notation potentially obscures similarities and differences among them. Furthermore, new JSDM implementations have been illustrated with different case studies, preventing direct comparisons of computational and statistical performance. We aim to resolve these outstanding issues by (a) highlighting similarities among seven presence–absence JSDMs using a clearly defined, singular notation; and (b) evaluating the computational and statistical performance of each JSDM using six datasets that vary widely in the numbers of sites, species, and environmental covariates considered. Our singular notation shows that many of the JSDMs are very similar, and in turn the parameter estimates of the different JSDMs are moderately to strongly positively correlated. In contrast, the JSDMs differ clearly in computational efficiency and memory limitations. Our framework will allow ecologists to make educated decisions about which JSDM best suits their objective, and will enable wider uptake of JSDM methods among the ecological community.
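For readers comparing notations, the following is a minimal sketch of the latent-variable (multivariate probit) formulation that underlies most presence–absence JSDMs; the symbols are illustrative assumptions for exposition and may differ from the paper's exact notation.

```latex
% Illustrative latent-variable JSDM notation (multivariate probit form);
% all symbols here are assumptions for exposition.
\[
  y_{ij} = \mathbb{1}(z_{ij} > 0), \qquad
  z_{ij} = \mathbf{x}_i^{\top}\boldsymbol{\beta}_j + e_{ij}, \qquad
  \mathbf{e}_i \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})
\]
% y_ij: presence (1) or absence (0) of species j at site i;
% x_i: environmental covariates at site i;
% beta_j: species-specific regression coefficients;
% Sigma: J x J residual covariance matrix capturing co-occurrence
% patterns left unexplained by the measured environment.
```

In this framing, differences among JSDM implementations often come down to how \(\boldsymbol{\Sigma}\) is structured and estimated (e.g. full-rank versus reduced-rank latent-factor approximations).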
Joint species distribution models (JSDMs) simultaneously model the distributions of multiple species, while accounting for residual co‐occurrence patterns. Despite increasing adoption of JSDMs in the literature, the question of how to define and evaluate JSDM predictions has only begun to be explored. We define four different JSDM prediction types that correspond to different aspects of species distribution and community assemblage processes. Marginal predictions are environment‐only predictions akin to predictions from single‐species models; joint predictions simultaneously predict entire community assemblages; and conditional marginal and conditional joint predictions are made at the species or assemblage level, conditional on the known occurrence state of one or more species at a site. We define five different classes of metrics that can be used to evaluate these types of predictions: threshold‐dependent, threshold‐independent, community dissimilarity, species richness and likelihood metrics. We illustrate the different prediction types and evaluation metrics using a case study in which we fit a JSDM to a frog occurrence dataset collected in Melbourne, Australia. Joint species distribution models present opportunities to investigate facets of species distribution and community assemblage processes that cannot be explored with single‐species models. We show that a variety of metrics are available to evaluate JSDM predictions, and that the choice of prediction type and evaluation metric should closely match the questions being investigated.
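To make the four prediction types concrete, here is one way to write them using the latent-variable notation sketched above; the conditioning sets and symbols are assumptions for illustration.

```latex
% Illustrative definitions of the four JSDM prediction types
% (symbols are assumptions; K indexes species whose occurrence state
% at site i is known, S the species being predicted).
\begin{align*}
  \text{marginal:}             \quad & \Pr(y_{ij} = 1 \mid \mathbf{x}_i) \\
  \text{joint:}                \quad & \Pr(y_{i1}, \dots, y_{iJ} \mid \mathbf{x}_i) \\
  \text{conditional marginal:} \quad & \Pr(y_{ij} = 1 \mid \mathbf{x}_i, \mathbf{y}_{i\mathcal{K}}) \\
  \text{conditional joint:}    \quad & \Pr(\mathbf{y}_{i\mathcal{S}} \mid \mathbf{x}_i, \mathbf{y}_{i\mathcal{K}})
\end{align*}
```

Marginal predictions integrate over the residual correlations, whereas conditional predictions exploit them: knowing that one species occurs at a site shifts the predicted probabilities for correlated species.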
Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.
Aim: After environmental disasters, species with large population losses may need urgent protection to prevent extinction and support recovery. Following the 2019-2020 Australian megafires, we estimated population losses and recovery in fire-affected fauna to inform conservation status assessments and management. Location: Temperate and subtropical Australia. Time period: 2019-2030 and beyond. Major taxa: Australian terrestrial and freshwater vertebrates; one invertebrate group. Methods: From >1,050 fire-affected taxa, we selected 173 whose distributions substantially overlapped the fire extent. We estimated the proportion of each taxon's distribution affected by fires, using fire severity and aquatic impact mapping, and new distribution mapping. Using expert elicitation informed by evidence of responses to previous wildfires, we estimated local population responses to fires of varying severity. We combined the spatial and elicitation data to estimate overall population loss and recovery trajectories, and thus indicate potential eligibility for listing as threatened, or uplisting, under Australian legislation. Results: We estimate that the 2019-2020 Australian megafires caused, or contributed to, population declines that make 70-82 taxa eligible for listing as threatened.
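One plausible formalisation of the loss calculation described above is a severity-weighted sum; the symbols, and the assumption that losses combine additively across severity classes, are illustrative rather than the paper's published method.

```latex
% A plausible formalisation of overall population loss for one taxon;
% the symbols and the additive combination are assumptions, not the
% paper's published method.
\[
  \hat{L} = \sum_{s} p_{s} \, \hat{\ell}_{s}
\]
% p_s: proportion of the taxon's distribution burnt at fire severity
%      class s (unburnt area contributes zero loss);
% l_s: expert-elicited proportional local population loss at severity s;
% L:   overall proportional population loss, compared against listing
%      thresholds under Australian legislation.
```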
The Contributor Roles Taxonomy (CRediT) has recently changed how author contributions are acknowledged. To extend and complement CRediT, we propose MeRIT, a new way of writing the Methods section that uses authors' initials to further clarify contributor roles for reproducibility and replicability.
As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique, applied to the evaluation of research claims in the social and behavioural sciences. The utility of processes that predict replicability lies in their capacity to test scientific claims without the costs of full replication. Experimental data support the validity of this process: a validation study produced a classification accuracy of 84% and an Area Under the Curve (AUC) of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process offers other benefits. It is highly scalable, suited both to rapid assessment of small numbers of claims and to assessment of high volumes of claims over an extended period; through an online elicitation platform, it has been used to assess 3,000 research claims over an 18-month period. It can be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data with the potential to provide insight into the limits of generalizability of scientific claims. Its primary limitation is its reliance on human-derived predictions, with consequent costs in participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in allocating effort for replication studies.
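As a concrete illustration of the validation metrics reported above, the sketch below computes classification accuracy and AUC from elicited replication probabilities against known outcomes. The data and the 0.5 threshold are hypothetical; this is not the repliCATS validation code.

```python
# Hedged sketch: computing accuracy and AUC for elicited replication
# probabilities against known replication outcomes. All values below
# are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Elicited probability that each claim replicates (hypothetical).
predicted_prob = np.array([0.85, 0.30, 0.40, 0.10, 0.90, 0.60])
# Observed replication outcome: 1 = replicated, 0 = did not replicate.
observed = np.array([1, 0, 1, 0, 1, 0])

# Dichotomise at 0.5 (an assumed threshold) for classification accuracy.
accuracy = accuracy_score(observed, predicted_prob >= 0.5)
# AUC is threshold-free: the probability that a randomly chosen
# replicating claim received a higher elicited probability than a
# randomly chosen non-replicating one.
auc = roc_auc_score(observed, predicted_prob)

print(f"accuracy = {accuracy:.2f}, AUC = {auc:.2f}")
```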
This paper explores judgements about the replicability of social and behavioural sciences research, and what drives those judgements. Using a mixed-methods approach, it draws on qualitative and quantitative data elicited using a structured, iterative approach to eliciting judgements from groups, called the IDEA protocol (‘Investigate’, ‘Discuss’, ‘Estimate’ and ‘Aggregate’). Five groups of five people separately assessed the replicability of 25 ‘known-outcome’ claims, that is, social and behavioural science claims that have already been subject to at least one replication study. Specifically, participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study). In addition to their quantitative judgements, participants also outlined the reasoning behind them. First, we quantitatively analysed possible correlates of predictive accuracy, such as self-rated understanding of and expertise in assessing each claim, and updating of judgements after feedback and discussion. We then qualitatively analysed the reasoning data (i.e. the comments and justifications people provided for their judgements) to explore the cues and heuristics used, and the features of group discussion that accompanied more and less accurate judgements.
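The sketch below illustrates the ‘Estimate’ and ‘Aggregate’ steps of an IDEA-style elicitation for a single claim: participants give a probability, revise it after discussion, and the group judgement is aggregated. The arithmetic-mean aggregation rule and all numbers are assumptions; IDEA implementations can use other aggregation rules and typically also elicit interval bounds.

```python
# Hedged sketch of IDEA-style 'Estimate' and 'Aggregate' steps for one
# claim; the mean aggregation rule and the numbers are assumptions.
import numpy as np

# Best estimates of replication probability from five participants,
# before (round 1) and after (round 2) group discussion.
round1 = np.array([0.40, 0.70, 0.55, 0.80, 0.60])
round2 = np.array([0.55, 0.65, 0.60, 0.70, 0.60])

group_estimate = round2.mean()       # aggregated group judgement
revision = np.abs(round2 - round1)   # per-person updating after discussion

print(f"group estimate = {group_estimate:.2f}")
print(f"mean revision after discussion = {revision.mean():.2f}")
```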