Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model-agnostic explanations (LIME) to help interpret the local-scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
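The reproducible analysis itself is not reproduced in this abstract, so the sketch below is only an illustration of the kind of LIME workflow it describes: the data are simulated, the predictor names (temperature, precipitation, tree_cover) are invented, and the model is a random forest fitted through caret, which the CRAN lime package supports out of the box. It is a minimal sketch of the technique, not the authors' actual analysis.

# Minimal, hypothetical LIME workflow for a machine learning SDM in R.
# Data and variable names are simulated for illustration only.
library(caret)  # model-training wrapper natively supported by lime
library(lime)   # local interpretable model-agnostic explanations

set.seed(42)
n <- 500
env <- data.frame(                      # simulated environmental predictors
  temperature   = rnorm(n, 25, 4),
  precipitation = rnorm(n, 800, 200),
  tree_cover    = runif(n, 0, 100)
)
presence <- factor(ifelse(              # simulated presence/absence response
  env$temperature > 23 & env$tree_cover + rnorm(n, 0, 10) > 40,
  "present", "absent"
))

fit <- train(env, presence, method = "rf")   # random forest SDM via caret

explainer   <- lime(env, fit)                # build the LIME explainer
explanation <- explain(env[1:3, ], explainer,
                       n_labels = 1, n_features = 3)
plot_features(explanation)                   # per-site predictor contributions

Each feature weight in the resulting explanation indicates how strongly a predictor pushed an individual site's prediction toward presence or absence, which is the local-scale model behavior the abstract refers to.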
Species distribution modeling (SDM) is an emerging field in ecology. The effects of anthropogenic climate change, habitat destruction (deforestation, pollution) and poaching are observable in ecosystems around the world (Elith & Leathwick, 2009). SDMs have been used to address these challenges with notable success, for example in estimating the effects of climate change on species distributions (Austin & Van Niel, 2011), planning nature reserves (Guisan et al., 2013) and predicting the distributions of invasive species (Descombes et al., 2016).
Word embeddings are omnipresent in Natural Language Processing (NLP) tasks. The same technology that defines words by their context can also define biological species. This study showcases this new method: species embedding (species2vec). By proximity sorting of 6,761,594 mammal observations from the whole world (2,862 different species), we are able to create a training corpus for the skip-gram model. The resulting species embeddings are tested in an environmental classification task. The classifier's performance confirms that the embeddings preserve the relationships between species and are representative of species consortia in an environment. We present this technique, prove its statistical and ecological significance, and provide the research community with a new concept for species and an associated tool to use. A similar project (but with a different approach, focused on environmental data) was performed by Chen et al. [4]; unfortunately, the code and model were not made publicly available, so we could not compare the methods directly.
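The corpus construction is only summarized above, so the following sketch, built on the CRAN word2vec package in R, merely illustrates the general skip-gram idea; the species "sentences" are invented stand-ins for the proximity-sorted observation sequences the study describes.

# Hypothetical species2vec sketch: treat sequences of spatially proximate
# species observations as "sentences" and train a skip-gram model on them.
# The three sentences below are invented; the study builds its corpus by
# proximity sorting ~6.8 million mammal observations.
library(word2vec)

sentences <- c(
  "loxodonta_africana syncerus_caffer phacochoerus_africanus",
  "loxodonta_africana equus_quagga connochaetes_taurinus",
  "vulpes_lagopus rangifer_tarandus lemmus_lemmus"
)

model <- word2vec(x = sentences, type = "skip-gram",  # skip-gram embeddings
                  dim = 16, window = 3, iter = 50, min_count = 1)

embeddings <- as.matrix(model)            # one embedding row per species
predict(model, "loxodonta_africana",      # species with similar contexts
        type = "nearest", top_n = 3)

On a toy corpus like this the nearest-neighbor query is of course meaningless; with millions of proximity-sorted observations, such queries are expected to return species that share environments, which is what the environmental classification task above evaluates.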
Progress in human society's technological and social development has come at the cost of staggering complexity. Understanding the origin, evolution, and consequences of this complexity is essential if managers are to lay the groundwork for reducing it in their work. Examples from the war in Afghanistan (2001-2021) and Pleistocene Park in northern Siberia support a recommended systems-thinking framework and its associated skills.