Autonomous acoustic recorders are an increasingly popular method for low‐disturbance, large‐scale monitoring of sound‐producing animals, such as birds, anurans, bats, and other mammals. A specialized use of autonomous recording units (ARUs) is acoustic localization, in which a vocalizing animal is located spatially, usually by quantifying the time delay of arrival of its sound at an array of time‐synchronized microphones. To describe trends in the literature, identify considerations for field biologists who wish to use these systems, and suggest advancements that will improve the field of acoustic localization, we comprehensively review published applications of wildlife localization in terrestrial environments. We describe the wide variety of methods used to complete the five steps of acoustic localization: (1) define the research question, (2) obtain or build a time‐synchronizing microphone array, (3) deploy the array to record sounds in the field, (4) process recordings captured in the field, and (5) determine animal location using position estimation algorithms. We find eight general purposes in ecology and animal behavior for localization systems: assessing individual animals' positions or movements, localizing multiple individuals simultaneously to study their interactions, determining animals' individual identities, quantifying sound amplitude or directionality, selecting subsets of sounds for further acoustic analysis, calculating species abundance, inferring territory boundaries or habitat use, and separating animal sounds from background noise to improve species classification. We find that the labor‐intensive steps of processing recordings and estimating animal positions have not yet been automated. In the near future, we expect that increased availability of recording hardware, development of automated and open‐source localization software, and improvement of automated sound classification algorithms will broaden the use of acoustic localization. 
With these three advances, ecologists will be better able to embrace acoustic localization, enabling low‐disturbance, large‐scale collection of animal position data.
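The core computation in step (5), estimating a position from time delays of arrival at a synchronized array, can be sketched as a nonlinear least-squares problem. The following is a minimal illustration only; the function name, the 2-D array geometry, and the fixed speed of sound are assumptions for the example, not the method of any particular published system:

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at 20 degrees C

def localize_tdoa(mic_positions, tdoas, ref=0):
    """Estimate a 2-D source position from time-differences-of-arrival.

    mic_positions: (n, 2) array of microphone coordinates in metres.
    tdoas: (n,) arrival-time differences (s) relative to microphone `ref`.
    """
    mics = np.asarray(mic_positions, dtype=float)
    delays = np.asarray(tdoas, dtype=float)

    def residuals(xy):
        # Distance from the candidate source position to each microphone
        dists = np.linalg.norm(mics - xy, axis=1)
        # Predicted delay of each microphone relative to the reference mic
        predicted = (dists - dists[ref]) / SPEED_OF_SOUND
        return predicted - delays

    guess = mics.mean(axis=0)  # start the search at the array centroid
    return least_squares(residuals, guess).x
```

With four or more non-collinear microphones and accurate delays, the residual minimum coincides with the true source position; in practice, timing error and reverberation make the fit approximate.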
Acoustic recordings of soundscapes are an important category of audio data that can be useful for answering a variety of questions, and an entire discipline within ecology, dubbed "soundscape ecology," has arisen to study them. Bird sound is often the focus of soundscape studies due to the ubiquity of birds in most terrestrial environments and their high vocal activity. Autonomous acoustic recorders have increased the quantity and availability of recordings of natural soundscapes while mitigating the impact of human observers on community behavior. However, such recordings are of little use without analysis of the sounds they contain. Manual analysis currently stands as the best means of processing this form of data for certain applications within soundscape ecology, but it is a laborious task, sometimes requiring many hours of human review to process comparatively few hours of recording. For this reason, few annotated datasets of soundscape recordings are publicly available. Moreover, there are no publicly available strongly-labeled soundscape recordings of bird sounds that contain information on timing, frequency, and species. Therefore, we present the first dataset of strongly-labeled soundscape recordings of bird sound released under a free-use license. These data were collected at Powdermill Nature Reserve, Rector, PA, in the northeastern United States. Recordings encompass 385 minutes of dawn chorus collected by autonomous acoustic recorders between April and July 2018. Recordings were collected in continuous bouts on four days during the study period and contain 48 species and 16,052 annotations. Applications of this dataset are numerous and include the training, validation, and testing of advanced machine learning models that detect or classify bird sounds.
There are no copyright or proprietary restrictions; please cite this paper when using these materials.
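As a sketch of how strongly-labeled annotations of this kind might be consumed programmatically, the snippet below reads a tab-separated annotation table into a data frame. The column names follow the Raven selection-table convention and are assumptions for illustration, not a specification of this dataset's actual format:

```python
import io
import pandas as pd

# Assumed Raven-style column names; verify against the dataset's own
# documentation before relying on them.
COLUMNS = ["Begin Time (s)", "End Time (s)",
           "Low Freq (Hz)", "High Freq (Hz)", "Species"]

def load_annotations(path_or_buffer):
    """Read a tab-separated table of strongly-labeled bird sound annotations.

    Each row describes one annotation: its time bounds, frequency bounds,
    and the species label.
    """
    df = pd.read_csv(path_or_buffer, sep="\t")
    return df[COLUMNS]
```

Tables in this layout can be filtered by species or time window with ordinary pandas operations, e.g. `df[df["Species"] == "NOCA"]`.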
Ecologists often study biodiversity by evaluating species occupancy and the relationship between occupancy and other covariates. Occupancy models are now widely used to account for false absences in field surveys and to reduce bias in estimates of covariate relationships. Existing occupancy models take as inputs binary detection/non‐detection observations of species at each visit to each site. However, autonomous sensing devices and machine learning models are increasingly used to survey biodiversity, generating a new type of observation record (i.e. continuous‐score data) that reflects a model's confidence that a species is present in each autonomously sensed file, instead of binary detection/non‐detection data. These data are not directly compatible with traditional binary occupancy modelling methods. Here, we develop a new occupancy model that treats the continuous scores at each visit as draws from a Gaussian mixture, combining a distribution of scores for files that do contain the species of interest and a distribution of scores for files that do not. The model takes as input continuous scores for each autonomously sensed and classified file, along with an optional small number of binary, manually verified detection and non‐detection annotations. We present a simulation study showing that, over a range of empirically realistic parameters, our model outperforms traditional occupancy models based on binary annotation alone. We also apply this new model to an empirical case study using data generated from five machine learning classifiers applied to autonomous acoustic recordings gathered in the eastern United States. Because our occupancy model generalizes allowable input data beyond binary observations, it is particularly well‐suited to the increasing volume of machine‐learning‐classified data in ecology and conservation.
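The visit-level mixture idea described above can be sketched as a likelihood computation. This is a simplified illustration that fixes the mixture parameters rather than estimating them as the published model does; the function name and all parameter symbols are assumptions for the example:

```python
import numpy as np
from scipy.stats import norm

def site_log_likelihood(scores, psi, theta, mu1, sd1, mu0, sd0):
    """Log-likelihood of one site's classifier scores under a simplified
    two-component Gaussian mixture occupancy model.

    psi: probability the site is occupied.
    theta: probability a given file contains the species, if occupied.
    (mu1, sd1): score distribution for files containing the species.
    (mu0, sd0): score distribution for files without the species.
    """
    scores = np.asarray(scores, dtype=float)
    f0 = norm.pdf(scores, mu0, sd0)  # score density if species absent from file
    f1 = norm.pdf(scores, mu1, sd1)  # score density if species present in file
    # Occupied site: each file's score is a theta-weighted mixture.
    ll_occupied = np.sum(np.log(theta * f1 + (1 - theta) * f0))
    # Unoccupied site: every file's score comes from the "absent" component.
    ll_unoccupied = np.sum(np.log(f0))
    # Marginalize over the latent occupancy state.
    return np.logaddexp(np.log(psi) + ll_occupied,
                        np.log(1 - psi) + ll_unoccupied)
```

In a full model these parameters would be fit by maximum likelihood or in a Bayesian framework, optionally informed by the small set of manually verified binary annotations.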
Birds singing in choruses must contend with the possibility of interfering with each other's songs, but not all species will interfere with each other to the same extent due to signal partitioning. Some evidence suggests that singing birds will avoid temporal overlap only in cases where there is overlap in the frequencies their songs occupy, but the extent to which this behaviour varies according to level of frequency overlap is not yet well understood. We investigated the hypothesis that birds will increasingly avoid heterospecific temporal overlap as their frequency overlap increases by testing for a linear correlation between frequency overlap and temporal avoidance across a community of temperate eastern North American birds. We found that there was a significant correlation across the whole community and within 12 of 15 commonly occurring individual species, which supports our hypothesis and adds to the growing body of evidence that birds adjust the timing of their songs in response to frequency overlap.
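The two quantities correlated in this analysis can be illustrated with a small sketch: a fractional frequency-band overlap measure, and a Pearson correlation of overlap against temporal-avoidance scores. The overlap definition and all numeric values below are assumptions for illustration, not the study's actual measurements:

```python
from scipy.stats import pearsonr

def band_overlap(low_a, high_a, low_b, high_b):
    """Fractional overlap of two frequency bands (Hz),
    relative to the narrower band."""
    shared = max(0.0, min(high_a, high_b) - max(low_a, low_b))
    narrower = min(high_a - low_a, high_b - low_b)
    return shared / narrower if narrower > 0 else 0.0

# Hypothetical per-species-pair values (illustrative only)
freq_overlap = [0.1, 0.4, 0.6, 0.8, 0.9]
temporal_avoidance = [0.05, 0.3, 0.5, 0.7, 0.85]
r, p = pearsonr(freq_overlap, temporal_avoidance)
# A strongly positive r would be consistent with the hypothesis that
# temporal avoidance increases with frequency overlap.
```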
A core goal of the National Ecological Observatory Network (NEON) is to measure changes in biodiversity across the 30-yr horizon of the network. In contrast to NEON's extensive use of automated instruments to collect environmental data, NEON's biodiversity surveys are almost entirely conducted using traditional human-centric field methods. We believe that the combination of instrumentation for remote data collection and machine learning models to process such data represents an important opportunity for NEON to expand the scope, scale, and usability of its biodiversity data collection while potentially reducing long-term costs. In this manuscript, we first review the current status of instrument-based biodiversity surveys within the NEON project and previous research at the intersection of biodiversity, instrumentation, and machine learning at NEON sites. We then survey methods that have been developed at other locations but could potentially be employed at NEON sites in the future. Finally, we expand on these ideas in five case studies that we believe suggest particularly fruitful future paths for automated biodiversity measurement at NEON sites: acoustic recorders for sound-producing taxa, camera traps for medium and large mammals, hydroacoustic and remote imagery for aquatic diversity, expanded remote and ground-based measurements for plant biodiversity, and laboratory-based imaging for physical specimens and samples in the NEON biorepository. Through its data science-literate staff and user community, NEON has a unique role to play in supporting the growth of such automated biodiversity survey methods, as well as demonstrating their ability to help answer key ecological questions that cannot be answered at the more limited spatiotemporal scales of human-driven surveys.
Bioacoustics is a powerful and increasingly commonly used tool for terrestrial and marine biological assessments. As the scale of bioacoustic data collection has increased, techniques for processing these data have diversified. However, with analysis methods rapidly evolving and dozens of analysis software packages already available, it is challenging to identify which software, if any, meets a particular researcher’s needs. We reviewed bioacoustics software to identify packages aimed at or used by bioacoustics researchers in ecology. We compiled descriptions of the function of 65 stable or actively developed software packages used for bioacoustics analyses. Of these, 59 were free or open-source packages. In addition, we developed free, open-source Python software, OpenSoundscape, that addresses gaps in available software. OpenSoundscape simplifies the process of creating flexible, scalable deep learning algorithms for bioacoustic analysis. It can be used to train binary or multiclass convolutional neural networks with any PyTorch-implemented model structure (e.g., ResNet50, Inception v3). Researchers can easily customize its spectrogram preprocessing and data augmentation routines to improve model performance. OpenSoundscape also includes modules to work with annotated acoustic data, apply additional signal processing algorithms, perform acoustic localization, and “open the black box” of deep learning using Grad-CAM.
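The spectrogram preprocessing step that pipelines like OpenSoundscape apply before CNN training can be sketched in a few lines. This is a generic illustration using SciPy, not OpenSoundscape's actual API; the function name and normalization choices are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_spectrogram(samples, sr=22050, n_fft=512):
    """Convert a mono waveform to a decibel-scaled, min-max normalized
    spectrogram, a common input representation for bioacoustic CNNs."""
    freqs, times, sxx = spectrogram(samples, fs=sr,
                                    nperseg=n_fft, noverlap=n_fft // 2)
    db = 10 * np.log10(sxx + 1e-10)  # small offset avoids log(0)
    # Scale to [0, 1] so the array can be fed to a network as an image
    return (db - db.min()) / (db.max() - db.min() + 1e-10)
```

In a full training pipeline, arrays like this would be augmented (e.g., time/frequency masking, noise overlay) and batched into a PyTorch model such as ResNet50.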
Using low‐coverage whole‐genome sequencing, analysis of vocalizations, and inferences from natural history, we document a first‐generation hybrid between a rose‐breasted grosbeak ( Pheucticus ludovicianus ) and a scarlet tanager ( Piranga olivacea ). These two species occur sympatrically throughout much of eastern North America, although they were not previously known to interbreed. Following the field identification of a putative hybrid, we use genetic and bioacoustic data to show that a rose‐breasted grosbeak was the maternal parent and a scarlet tanager was the paternal parent of the hybrid, whose song was similar to that of the latter species. These two species diverged >10 million years ago, and thus it is surprising to find a hybrid formed under natural conditions in the wild. Notably, the hybrid has an exceptionally heterozygous genome, with a conservative estimate of a heterozygous base every 100 bp. The observation that this hybrid of such highly divergent parental taxa has survived until adulthood serves as another example of the capacity for hybrid birds to survive with an exceptionally divergent genomic composition.