The observation that a few species in ecological communities are exceptionally abundant, whereas most are rare, prompted the development of species abundance models. Nevertheless, despite the large literature on the commonness and rarity of species inspired by these pioneering studies, some widespread empirical patterns of species abundance resist easy explanation. Notable among these is the observation that in large assemblages there are more rare species than the log normal model predicts. Here we use a long-term (21-year) data set, from an estuarine fish community, to show how an ecological community can be separated into two components. Core species, which are persistent, abundant and biologically associated with estuarine habitats, are log normally distributed. Occasional species occur infrequently in the record, are typically low in abundance and have different habitat requirements; they follow a log series distribution. These distributions are overlaid, producing the negative skew that characterizes real data sets.
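The two-component structure described here can be sketched numerically: draw core-species abundances from a log-normal distribution, draw occasional-species abundances from a log series, and overlay them. For suitable proportions of the two groups, the combined log-abundance distribution shows the negative (left) skew the text describes. All parameters below (species counts, distribution parameters) are illustrative assumptions, not estimates from the estuarine data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical community: parameter values are illustrative only.
n_core, n_occasional = 60, 25

# Core species: persistent and abundant, log-normally distributed.
core = rng.lognormal(mean=5.0, sigma=1.0, size=n_core)

# Occasional species: infrequent and rare, log-series distributed.
occasional = stats.logser.rvs(0.9, size=n_occasional, random_state=rng)

# Overlay the two components and inspect the log-abundance distribution.
log_abund = np.log(np.concatenate([core, occasional]))
skewness = stats.skew(log_abund)
print(skewness)  # typically negative for these parameters: excess rare species
```

The occasional species add extra mass at the low-abundance end, below the mode of the log-normal core, which is what pulls the combined distribution left of the symmetric log-normal prediction.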
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
Climatic change has been implicated as the cause of abundance fluctuations in marine fish populations worldwide, but the effects on whole communities are poorly understood. We examined the effects of regional climatic change on two fish assemblages using independent datasets from inshore marine (English Channel, 1913-2002) and estuarine environments (Bristol Channel, 1981-2001). Our results show that climatic change has had dramatic effects on community composition. Each assemblage contained a subset of dominant species whose abundances were strongly linked to annual mean sea-surface temperature. Species' latitudinal ranges were not good predictors of species-level responses, however, and the same species did not show congruent trends between sites. This suggests that within a region, populations of the same species may respond differently to climatic change, possibly owing to additional local environmental determinants, interspecific ecological interactions and dispersal capacity. This will make species-level responses difficult to predict within geographically differentiated communities.
In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.
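The reporting practice argued for above, aggregating results across random seeds and attaching a significance test rather than quoting a single best run, can be sketched as follows. The scores are fabricated for illustration, and Welch's t-test is just one of several significance tests used in the deep RL literature.

```python
import numpy as np
from scipy import stats

# Hypothetical final-return scores for two deep RL methods, 10 seeds each.
# The numbers are made up for illustration.
method_a = np.array([310., 295., 402., 288., 350., 315., 299., 360., 275., 330.])
method_b = np.array([305., 410., 320., 298., 355., 300., 415., 310., 285., 345.])

# Report mean +/- standard deviation across seeds, not a single best run.
print(f"A: {method_a.mean():.1f} +/- {method_a.std(ddof=1):.1f}")
print(f"B: {method_b.mean():.1f} +/- {method_b.std(ddof=1):.1f}")

# Welch's t-test: is the observed difference large relative to the
# seed-to-seed variance intrinsic to the methods?
t, p = stats.ttest_ind(method_a, method_b, equal_var=False)
print(f"p = {p:.3f}")  # a large p-value means the "improvement" may be noise
```

Here the means differ by about 12 points, but the seed-to-seed standard deviation is roughly 40 points, so the test does not support claiming one method is better, which is exactly the kind of non-result that single-run comparisons hide.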
Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Thus, deep RL opens up many new applications in domains such as healthcare, robotics, smart grids, finance, and many more. This manuscript provides an introduction to deep reinforcement learning models, algorithms and techniques. Particular focus is on the aspects related to generalization and how deep RL can be used for practical applications, including settings that may be constrained (e.g., no access to an accurate simulator, or limited data). We assume the reader is familiar with basic machine learning concepts. Over the past few years, RL has become increasingly popular due to its success in addressing challenging sequential decision-making problems. Several of these achievements are due to the combination of RL with deep learning techniques (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016). This combination, called deep RL, is most useful in problems with high-dimensional state spaces. Previous RL approaches faced a difficult design issue in the choice of features (Munos and Moore, 2002; Bellemare et al., 2013). However, deep RL has been successful in complicated tasks with little prior knowledge thanks to its ability to learn different levels of abstraction from data. For instance, a deep RL agent can successfully learn from visual perceptual inputs made up of thousands of pixels (Mnih et al., 2015). This opens up the possibility of mimicking some human problem-solving capabilities, even in high-dimensional spaces, which only a few years ago was difficult to conceive. Several notable works using deep RL in games have stood out for attaining super-human level in playing Atari games from the pixels.
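As a concrete toy instance of the sequential decision-making problems described above, here is tabular Q-learning on a five-state chain. The environment and all hyperparameters are invented for illustration; deep RL replaces the Q-table below with a neural network when the states (e.g., raw pixels) are far too numerous to enumerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain MDP: 5 states, actions 0 = left / 1 = right, reward 1.0 for
# reaching the rightmost (terminal) state. Hyperparameters are illustrative.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.5

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update; bootstrap term is zeroed at the terminal state
        target = r + gamma * Q[s_next].max() * (s_next != n_states - 1)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy: "right" (1) in every non-terminal state
```

Because the environment is deterministic, the learned values approach the optimal ones (Q*(3, right) = 1, Q*(2, right) = 0.9, and so on, discounted by gamma per step), and the greedy policy moves right everywhere.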
The fat laid down as a winter reserve by 0-group sand smelt, Atherina boyeri, was found to be size-dependent. The larger, earlier-spawned fish lay down more fat prior to the onset of winter. During the winter the fish do not feed for some 100 days and rely on this fat for energy; later-spawned 0-group fish (< 59 mm S.L. in November) have insufficient fat reserves and starve to death in a normal winter. This loss of the smallest 46% of the 0-group is shown as an increase in the mean size of the 0-group over the winter period. Older sand smelt age classes have more than sufficient fat reserves for overwintering. There is thus a clear advantage in spawning early in the season, and any restriction on spawning ground availability at that time will result in overall population regulation. This conclusion supports the hypothesis that the density-dependent control on population size in the sand smelt is a limitation on the number of fish which can spawn at the optimum time.
Temporal variation in species abundances occurs in all ecological communities. Here, we explore the role that this temporal turnover plays in maintaining assemblage diversity. We investigate a three-decade time series of estuarine fishes and show that the abundances of the individual species fluctuate asynchronously around their mean levels. We then use a time-series modelling approach to examine the consequences of different patterns of turnover, by asking how the correlation between the abundance of a species in a given year and its abundance in the previous year influences the structure of the overall assemblage. Classical diversity measures that ignore species identities reveal that the observed assemblage structure will persist under all but the most extreme conditions. However, metrics that track species identities indicate a narrower set of turnover scenarios under which the predicted assemblage resembles the natural one. Our study suggests that species diversity metrics are insensitive to change and that measures that track species ranks may provide better early warning that an assemblage is being perturbed. It also highlights the need to incorporate temporal turnover in investigations of assemblage structure and function.
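The kind of time-series model this study describes, a species' abundance fluctuating around its mean level with a tunable correlation between one year and the next, can be sketched as a first-order autoregressive (AR(1)) process. The parameter values here are illustrative assumptions, not fitted to the estuarine fish data.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_log_abundance(mean, phi, sigma, n_years):
    """AR(1) sketch: log abundance fluctuates around `mean` with lag-1
    correlation `phi` (phi = 0: no year-to-year memory; phi near 1: slow,
    persistent excursions away from the mean)."""
    x = np.empty(n_years)
    x[0] = mean
    for t in range(1, n_years):
        x[t] = mean + phi * (x[t - 1] - mean) + rng.normal(0.0, sigma)
    return x

# Three decades of hypothetical log abundance for one species.
series = simulate_log_abundance(mean=4.0, phi=0.4, sigma=0.5, n_years=30)

# Empirical lag-1 correlation; with only 30 years this is a noisy estimate.
lag1 = np.corrcoef(series[:-1], series[1:])[0, 1]
print(round(lag1, 2))
```

Varying `phi` across a set of such simulated species is one simple way to explore the turnover scenarios the text describes: asynchronous, weakly correlated fluctuations versus strongly persistent ones.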