Autonomous recording units (ARUs) are emerging as a useful technology for the study and monitoring of animals that produce vocalizations. During summer and fall of 2013, we performed a series of experiments aimed at developing sampling protocols to count nocturnally active yellow rails (Coturnicops noveboracensis) from sound recordings. Field-based portions of the work took place in the rural municipality of Foam Lake, Saskatchewan, Canada, in an open landscape where yellow rails can be found during the breeding season; lab-based portions of the work occurred in Saskatoon, Saskatchewan, Canada. Our objectives were to 1) determine the frequency of yellow rail vocalizations to derive an empirically based sampling interval for counting individual birds; 2) assess the accuracy of yellow rail counts made from recordings; 3) determine the approximate sampling radius of the ARU for detecting yellow rails; and 4) determine the approximate audio volume ("loudness") of yellow rail calls. We developed a sonogram-based method for counting individual birds on recordings. Using field recordings of individual yellow rails, we generated recordings with known numbers of calling individuals (i.e., 1-12) and tested the accuracy of the sonogram-based counts. Regardless of experience, observers were able to determine the number of rails calling with a high level of accuracy, especially when the chorus was composed of ≤6 individuals. From broadcast trials employing multiple ARUs, we found the effective detection radius of calling yellow rails to be between 150 m and 175 m. Although detection radius was influenced by broadcast intensity and ambient conditions, we view this range of distance as a reasonable estimate of the effective sampling radius for the ARUs that we used, which is useful for deriving density estimates.
Finally, we measured loudness of yellow rail calling at approximately 95 dB; this value is useful to research efforts attempting to mimic actual yellow rails (e.g., call-broadcast surveys, additional ARU experiments). A combination of the sonogram-counting method and baseline information on detection radius of the ARU provides a tool that will generate high-quality data on yellow rail occurrence, abundance, and density. Digital recorders represent a means to rapidly improve survey coverage of yellow rails throughout the species' range. © 2016 The Wildlife Society.
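As an illustrative aside (not code from the study), the step from an ARU count to a density estimate using an effective detection radius like the 150-175 m range reported above can be sketched as follows; the function name and the example count are invented for illustration, and the sketch assumes every bird inside the radius is detected.

```python
import math

def density_per_hectare(count, radius_m):
    """Convert a count of calling birds within an ARU's effective
    detection radius into a density in birds per hectare.
    Assumes every bird inside the radius is detected."""
    area_ha = math.pi * radius_m ** 2 / 10_000   # circular area; 10,000 m^2 per ha
    return count / area_ha

# e.g., 5 rails counted on a recording, 150 m effective radius
d = density_per_hectare(5, 150.0)   # ~0.71 birds per hectare
```

Because the detection radius enters the denominator squared, the choice between 150 m and 175 m changes the resulting density by roughly a third, which is why an empirically grounded radius matters.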
The rules governing bird song sequences vary considerably across the avian phylogeny, and modifications to these rules represent one of the many ways in which bird song varies interspecifically. Cassin's Vireo (Vireo cassinii) is one species that shows a highly structured syntax, with clearly non-random patterns of sequential organization in their songs. Here I present a description of Cassin's Vireo song sequences from the Sierra Nevada Mountains in California and employ network analysis to quantify transition patterns within the songs. Repertoire sizes varied between 44 and 60 phrase types per bird for the 13 birds analyzed here. The repertoire was subdivided into 'themes' containing between two and seven phrase types. The birds sang the phrase types in a given theme for a time before eventually introducing a new theme; in this manner the repertoire was revealed relatively slowly over time. Theme composition within a bird's repertoire did not change within or between singing bouts throughout the season. The tendency to sing in themes was corroborated by network analysis, which revealed small-world structure in the songs. Phrase types were widely shared within the population. I discuss these findings as they compare with the singing styles of other species, both closely and distantly related.
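A minimal sketch of the kind of first-order transition tally that underlies such a network analysis, using an invented phrase-type sequence (the actual study analyzed full repertoires of 44-60 phrase types and used formal small-world statistics):

```python
from collections import Counter, defaultdict

# Hypothetical phrase-type sequence from one singing bout; labels are invented.
# Note how "A"/"B" and "C"/"D" behave like two small themes.
sequence = ["A", "B", "A", "B", "C", "D", "C", "D", "C", "A", "B", "A"]

# Count first-order transitions: how often phrase type a is followed by b.
transitions = Counter(zip(sequence, sequence[1:]))

# Row-normalize into P(next phrase | current phrase); these probabilities
# are the edge weights of the directed transition network.
out_totals = defaultdict(int)
for (a, _), n in transitions.items():
    out_totals[a] += n
probs = {(a, b): n / out_totals[a] for (a, b), n in transitions.items()}
```

In this toy sequence, transitions within a theme (e.g., A to B) are far more probable than transitions between themes, which is exactly the non-random, clustered structure that a small-world analysis of the network formalizes.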
[Figure: Bird songs recorded and localized by HARKBird] Understanding auditory scenes is important when deploying intelligent robots and systems in real-world environments. We believe that robot audition can better recognize acoustic events in the field than conventional methods such as human observation or recording with a single-channel microphone. We are particularly interested in acoustic interactions among songbirds. Birds do not always vocalize at random; for example, they may instead divide a soundscape so that they avoid overlapping their songs with those of other birds. To understand such complex interaction processes, we must collect a large amount of spatiotemporal data in which multiple individuals and species are singing simultaneously. However, it is costly and difficult to manually annotate many or long recorded tracks to detect their interactions. To solve this problem, we are developing HARKBird, an easily available and portable system consisting of a laptop PC running HARK (Honda Research Institute Japan Audition for Robots with Kyoto University), open-source software for robot audition, together with a low-cost, commercially available microphone array. HARKBird enables us to automatically extract the songs of multiple individuals from recordings. In this paper, we introduce the current status of our project and report preliminary results of recording experiments in two different types of forests – one in the USA and the other in Japan – using this system to automatically estimate the direction of arrival of the songs of multiple birds and separate them from the recordings. We also discuss asymmetries among species in their tendency to partition temporal resources.
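HARK's localization pipeline is considerably more sophisticated, but the core idea behind estimating a direction of arrival from a microphone array (inter-microphone time delays) can be illustrated with a standard GCC-PHAT delay estimator. This is a generic sketch, not HARKBird code; the signals and sample rate are synthetic.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the time delay of `sig` relative to `ref` via the
    GCC-PHAT cross-correlation, a standard building block for
    direction-of-arrival estimation with microphone arrays."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15          # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(cc) - max_shift
    return shift / fs               # delay in seconds

# Synthetic check: one channel hears the "song" 10 samples later at 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(1024)
sig = np.concatenate((np.zeros(10), ref))[:1024]
delay = gcc_phat_delay(sig, ref, fs)
```

Given such pairwise delays and the known array geometry, the direction of arrival follows from simple trigonometry; HARK additionally separates the localized sources, which is what makes the automatic annotation described above possible.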
Autonomous recording units (ARUs) show promise for improving the spatial and temporal coverage of biodiversity monitoring programs, and for improving the resolution with which the behaviors of animals can be monitored on small spatial scales. Most ARUs, however, provide the user with little to no ability to determine the direction of an incoming sound, a shortcoming that limits the utility of ARU recordings for assessing the abundance of animals. We present a recording system constructed from two Wildlife Acoustics SM3 recording units that can estimate the direction-of-arrival (DOA) of an incoming signal with high accuracy. Field tests of this system revealed that 95% of sounds were estimated within 12° of the true DOA in the azimuth angle and 9° in the elevation angle, and that the system was largely robust to background noise and accurate to at least 30 m. We tested the ability of the system to discriminate up to four simulated birds singing simultaneously and show that the system generally performed well at this task, but, as expected, fainter and longer sounds were more likely to be overlapped and therefore undetected by the system. We propose that a microphone system that can estimate the DOA of sounds, such as the system presented here, may improve the ability of ARUs to assess abundance during biodiversity surveys by facilitating more accurate localization of sounds in three dimensions.
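A natural way to score a DOA estimate against the known bearing of a broadcast sound, as in the field tests above, is the great-circle angle between the two directions. The helper below is an illustration of that metric, not the authors' evaluation code.

```python
import math

def angular_error_deg(az1, el1, az2, el2):
    """Great-circle angle (degrees) between two directions given as
    azimuth/elevation pairs in degrees."""
    a1, e1, a2, e2 = map(math.radians, (az1, el1, az2, el2))
    # Convert each direction to a unit vector, then take the angle between them.
    v1 = (math.cos(e1) * math.cos(a1), math.cos(e1) * math.sin(a1), math.sin(e1))
    v2 = (math.cos(e2) * math.cos(a2), math.cos(e2) * math.sin(a2), math.sin(e2))
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(v1, v2))))
    return math.degrees(math.acos(dot))

# A 12-degree azimuth miss at the horizon is a 12-degree angular error:
err = angular_error_deg(0.0, 0.0, 12.0, 0.0)
```

Note that at high elevations an azimuth miss corresponds to a smaller great-circle error, which is why reporting azimuth and elevation errors separately, as the study does, is informative.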