Community science image libraries offer a massive, but largely untapped, source of observational data for phenological research. The iNaturalist platform provides a particularly rich archive, containing more than 49 million verifiable, georeferenced, open-access images encompassing seven continents and over 278,000 species. A critical limitation preventing scientists from taking full advantage of this rich data source is labor: each image must be manually inspected and categorized by phenophase, which is both time-intensive and costly. Consequently, researchers may only be able to use a subset of the total number of images available in the database. While iNaturalist has the potential to yield enough data for high-resolution and spatially extensive studies, it requires more efficient tools for phenological data extraction. A promising solution is to automate the image annotation process using deep learning. Recent innovations in deep learning have made these open-source tools accessible to a general research audience. However, it is unknown whether deep learning tools can accurately and efficiently annotate phenophases in community science images. Here, we train a convolutional neural network (CNN) to annotate iNaturalist images of Alliaria petiolata into distinct phenophases and compare the performance of the model with that of non-expert human annotators. We demonstrate that researchers can successfully employ deep learning techniques to extract phenological information from community science images. A CNN classified two-stage phenology (flowering and non-flowering) with 95.9% accuracy and four-stage phenology (vegetative, budding, flowering, and fruiting) with 86.4% accuracy. The overall accuracy of the CNN did not differ from that of humans (p = 0.383), although performance varied across phenophases. We found that a primary challenge of using deep learning for image annotation lay not in the model itself but in the quality of the community science images. Up to 4% of A. petiolata images in iNaturalist were taken from an improper distance, were physically manipulated, or were digitally altered, which limited the ability of both human and machine annotators to classify phenology accurately. Thus, we provide a list of photography guidelines that could be included in community science platforms to inform community scientists of best practices for creating images that facilitate phenological analysis.
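To make the CNN workflow above concrete, the following minimal sketch shows how a researcher might fine-tune a pretrained image classifier for the four-stage phenophase task. It is an illustrative example rather than the authors' exact pipeline; the directory layout, architecture choice (ResNet-18 via torchvision), class names, and hyperparameters are all assumptions.

# Minimal sketch (not the authors' exact pipeline): fine-tune a pretrained
# ResNet-18 to classify photos into four assumed phenophase classes.
# The data path "data/train/<class>/*.jpg" and hyperparameters are illustrative.
import torch
from torch import nn
from torchvision import datasets, models, transforms

classes = ["vegetative", "budding", "flowering", "fruiting"]  # assumed labels

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_data = datasets.ImageFolder("data/train", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(classes))  # replace classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()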
Anurans (frogs and toads) are among the most globally threatened taxonomic groups. Successful conservation of anurans will rely on improved data on the status of, and changes in, local populations, particularly for rare and threatened species. Automated sensors, such as acoustic recorders, have the potential to provide such data by massively increasing the spatial and temporal scale of population sampling efforts. Analyzing such data sets will require robust and efficient tools that can automatically identify the presence of a species in audio recordings. Like bats and birds, many anuran species produce distinct vocalizations that can be captured by autonomous acoustic recorders, making them excellent candidates for automated recognition. However, in contrast to birds and bats, effective automated acoustic recognition tools for anurans are not yet widely available. An effective automated call-recognition method for anurans must be robust to the challenges of real-world field data and should not require extensive labeled data sets. We devised a tool that identifies anuran vocalizations in audio recordings based on their periodic structure: the repeat-interval-based bioacoustic identification tool (RIBBIT). We applied RIBBIT to field recordings to study the boreal chorus frog (Pseudacris maculata) of temperate North American grasslands and the critically endangered variable harlequin frog (Atelopus varius) of tropical Central American rainforests. The tool accurately identified boreal chorus frogs even when they vocalized in heavily overlapping choruses, and it identified variable harlequin frog vocalizations at a field site where the species had very rarely been encountered in visual surveys. Using a few simple parameters, RIBBIT can detect any vocalization with a periodic structure, including those of many anurans, insects, birds, and mammals. We provide open-source implementations of RIBBIT in Python and R to support its use for other taxa and communities.
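The core periodicity idea behind a tool like RIBBIT can be illustrated with a short, self-contained Python sketch: score a clip by how much energy the amplitude envelope of the species' call band carries at an expected pulse-repetition rate. This is a simplified illustration, not the published RIBBIT implementation, and the band limits and pulse-rate window shown are hypothetical example values.

# Simplified illustration of periodicity scoring (not the published RIBBIT code).
import numpy as np
from scipy import signal
from scipy.io import wavfile

def pulse_rate_score(wav_path, band=(2000, 3500), pulse_rate=(15, 30)):
    """Fraction of envelope power at the expected pulse-repetition rate (example values)."""
    sr, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                      # mix to mono if stereo
        audio = audio.mean(axis=1)
    audio = audio.astype(float)
    freqs, times, spec = signal.spectrogram(audio, fs=sr, nperseg=512, noverlap=256)

    # Amplitude envelope of the assumed call band over time
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    envelope = spec[in_band].sum(axis=0)
    envelope -= envelope.mean()

    # Spectrum of the envelope: peaks indicate periodic pulsing
    frame_rate = 1.0 / (times[1] - times[0])
    env_freqs = np.fft.rfftfreq(len(envelope), d=1.0 / frame_rate)
    env_power = np.abs(np.fft.rfft(envelope)) ** 2

    # Score = fraction of envelope power inside the expected pulse-rate window
    in_window = (env_freqs >= pulse_rate[0]) & (env_freqs <= pulse_rate[1])
    return env_power[in_window].sum() / env_power.sum()

A clip containing a regularly pulsed call should score noticeably higher than background noise or non-periodic sound, which is the property the detection threshold exploits.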
A core goal of the National Ecological Observatory Network (NEON) is to measure changes in biodiversity across the 30-yr horizon of the network. In contrast to NEON's extensive use of automated instruments to collect environmental data, NEON's biodiversity surveys are conducted almost entirely with traditional, human-centric field methods. We believe that the combination of instrumentation for remote data collection and machine learning models to process such data represents an important opportunity for NEON to expand the scope, scale, and usability of its biodiversity data collection while potentially reducing long-term costs. In this manuscript, we first review the current status of instrument-based biodiversity surveys within the NEON project and previous research at the intersection of biodiversity, instrumentation, and machine learning at NEON sites. We then survey methods that have been developed at other locations but could potentially be employed at NEON sites in the future. Finally, we expand on these ideas in five case studies that we believe suggest particularly fruitful paths for automated biodiversity measurement at NEON sites: acoustic recorders for sound-producing taxa, camera traps for medium and large mammals, hydroacoustic and remote imagery for aquatic diversity, expanded remote and ground-based measurements for plant biodiversity, and laboratory-based imaging for physical specimens and samples in the NEON biorepository. Through its data science-literate staff and user community, NEON has a unique role to play in supporting the growth of such automated biodiversity survey methods, as well as in demonstrating their ability to help answer key ecological questions that cannot be answered at the more limited spatiotemporal scales of human-driven surveys.
The AudioMoth is a popular autonomous recording unit (ARU) that is widely used to record vocalizing species in the field. Despite its growing use, there have been few quantitative tests of this recorder's performance. Such information is needed to design effective field surveys and to appropriately analyze recordings made by this device. Here, we report the results of two tests designed to evaluate the performance characteristics of the AudioMoth recorder. First, we performed indoor and outdoor pink-noise playback experiments to evaluate how different device settings, orientations, mounting conditions, and housing options affect frequency response patterns. We found little variation in acoustic performance between devices and relatively little effect of placing recorders in a plastic bag for weather protection. The AudioMoth has a mostly flat on-axis frequency response with a boost above 3 kHz and a generally omnidirectional response that is attenuated behind the recorder, an effect that is accentuated when the device is mounted on a tree. Second, we performed battery life tests under a variety of recording frequencies, gain settings, environmental temperatures, and battery types. We found that standard alkaline batteries last for an average of 189 h at room temperature using a 32 kHz sample rate, and that lithium batteries can last twice as long as alkaline batteries at freezing temperatures. This information will aid researchers in both collecting and analyzing recordings generated by the AudioMoth recorder.
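As a quick worked example of how the battery figures above translate into deployment planning, the snippet below converts the roughly 189 h alkaline-battery estimate into an approximate number of survey days under an assumed duty-cycled schedule; the 4 h/day schedule is illustrative, and the calculation ignores standby drain between recording windows.

# Back-of-the-envelope planning using the ~189 h alkaline figure reported above.
# The 4 h/day recording schedule is an assumed example, not from the study,
# and standby power draw between recording windows is ignored.
battery_hours = 189          # continuous recording at 32 kHz, room temperature
recording_hours_per_day = 4  # e.g., dawn and dusk chorus windows (assumed)

days = battery_hours / recording_hours_per_day
print(f"Approximate deployment length: {days:.0f} days")  # ~47 days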
Ruffed grouse (Bonasa umbellus) populations are declining throughout their range, which has prompted efforts to understand the drivers of the decline. Ruffed grouse monitoring efforts often rely on acoustic drumming surveys, in which a surveyor listens for the distinctive drumming sound that male ruffed grouse produce during the breeding season. Field-based drumming surveys can fail to detect ruffed grouse when the birds drum infrequently or irregularly, making this species an excellent candidate for remote acoustic sensing with automated recording units (ARUs). An accurate automated recognition method for ruffed grouse drumming could enable effective and efficient use of ARU data for monitoring efforts; however, no such tool is currently available. Here we develop an automated method for detecting ruffed grouse drumming in audio recordings. Our detector uses a signal processing pipeline designed to recognize the accelerating pattern of drumming. We show that the automated recognition method accurately and efficiently detects drumming events in a set of labeled ARU field recordings. In a case study at 56 locations in Central Pennsylvania, we compared detections of ruffed grouse from four survey methods: field-based acoustic drumming surveys, surveys conducted by humans listening to ARU recordings, and automated recognition applied to both a 1-day and a 28-day recording period. Field-based surveys detected drumming at 9 of 56 locations (16%), while surveys conducted by humans listening to ARU recordings detected drumming at 8 locations (14%). Using automated recognition, the 1-day recording period produced detections at 17 locations (30%) and the 28-day recording period produced detections at 34 locations (61%). Our case study supports the idea that automated recognition can unlock the value of ARU data sets by temporally expanding the survey period. We provide an open-source Python implementation of the recognition method to support further use in ruffed grouse monitoring efforts.
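The following Python sketch illustrates one way to flag an accelerating pulse train in the spirit of the detector described above; it is not the authors' published pipeline, and the low-pass cutoffs, peak threshold, and minimum pulse count are assumed example values.

# Illustrative sketch of flagging an accelerating pulse train (not the published detector).
import numpy as np
from scipy import signal
from scipy.io import wavfile

def looks_like_drumming(wav_path, min_pulses=8, threshold=4.0):
    """Return True if the clip contains a long, accelerating low-frequency pulse train."""
    sr, audio = wavfile.read(wav_path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)
    audio = audio.astype(float)

    # Drumming energy is concentrated at low frequencies; low-pass below ~200 Hz (assumed cutoff)
    b, a = signal.butter(4, 200 / (sr / 2), btype="low")
    low = signal.filtfilt(b, a, audio)

    # Smooth the rectified signal to obtain an amplitude envelope
    envelope = signal.filtfilt(*signal.butter(2, 20 / (sr / 2), btype="low"), np.abs(low))

    # Find pulse peaks that stand well above the background level
    peaks, _ = signal.find_peaks(envelope,
                                 height=threshold * np.median(envelope),
                                 distance=int(0.05 * sr))
    if len(peaks) < min_pulses:
        return False

    # Drumming accelerates: most inter-pulse intervals should decrease over the bout
    intervals = np.diff(peaks) / sr
    fraction_decreasing = np.mean(np.diff(intervals) < 0)
    return fraction_decreasing > 0.7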