Building on earlier work identifying Biologically Important Areas (BIAs) for cetaceans in U.S. waters (BIA I), we describe the methodology and structured expert elicitation principles used in the “BIA II” effort to update existing BIAs, identify and delineate new BIAs, and score BIAs for 25 cetacean species, stocks, or populations in seven U.S. regions. BIAs represent areas and times in which cetaceans are known to concentrate for activities related to reproduction, feeding, and migration, as well as known ranges of small and resident populations. In this BIA II effort, regional cetacean experts identified the full extent of any BIAs in or adjacent to U.S. waters, based on scientific research, Indigenous knowledge, local knowledge, and community science. The new BIA scoring and labeling system improves the utility and interpretability of the BIAs by designating an overall Importance Score that considers both (1) the intensity and characteristics underlying an area’s identification as a BIA; and (2) the quantity, quality, and type of information, and associated uncertainties upon which the BIA delineation and scoring depend. Each BIA is also scored for boundary uncertainty and spatiotemporal variability (dynamic, ephemeral, or static). BIAs are region-, species-, and time-specific, and may be hierarchically structured where detailed information is available to support different scores across a BIA. BIAs are compilations of the best available science and have no inherent regulatory authority. BIAs may be used by international, federal, state, local, or Tribal entities and the public to support planning and marine mammal impact assessments, and to inform the development of conservation and mitigation measures, where appropriate under existing authorities. Information provided online for each BIA includes: (1) a BIA map; (2) BIA scores and label; (3) a metadata table detailing the data, assumptions, and logic used to delineate, score, and label the BIA; and (4) a list of references used in the assessment. Regional manuscripts present maps and scores for the BIAs, by region, and narratives summarizing the rationale and information upon which several representative BIAs are based. We conclude with a comparison of BIA II to similar international efforts and recommendations for improving future BIA assessments.
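The components listed above (map, scores and label, metadata table, references) imply a simple per-BIA record. The sketch below is only a hypothetical illustration of such a record in Python; every field name and type is an assumption for illustration, not the actual online schema.

```python
# Hypothetical sketch of a per-BIA record implied by the abstract; field names
# and types are illustrative assumptions, not the published BIA II schema.
from dataclasses import dataclass, field

@dataclass
class BIARecord:
    region: str                  # one of the seven U.S. regions
    species: str                 # species, stock, or population
    bia_type: str                # reproduction, feeding, migration, or small & resident
    months: list[str]            # time period the BIA applies to
    importance_score: int        # combines intensity and data-support scores
    boundary_uncertainty: int    # confidence in the delineated boundary
    variability: str             # "dynamic", "ephemeral", or "static"
    geometry_path: str           # path/URL to the BIA map polygon
    metadata: dict = field(default_factory=dict)   # data, assumptions, logic
    references: list[str] = field(default_factory=list)
```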
Monitoring ecological changes in marine ecosystems is expensive and time-consuming. Passive acoustic methods provide continuous monitoring of soniferous species, are relatively inexpensive, and can be integrated into a larger network to provide enhanced spatial and temporal coverage of ecological events. We demonstrate how these methods can be used to detect changes in fish populations in response to a Karenia brevis red tide harmful algal bloom by examining sound spectrum levels recorded by two land-based passive acoustic listening stations (PALS) deployed in Sarasota Bay, Florida, before and during a red tide event. Significant and temporally persistent decreases in sound spectrum levels were recorded in real time at both PALS in four frequency bands spanning 0.172–20 kHz after K. brevis cells were opportunistically sampled near the stations. The decrease in sound spectrum levels and increase in K. brevis cell concentrations also coincided with decreased catch per unit effort (CPUE) and species density per unit effort (SDPUE) for non-clupeid and soniferous fish species, as well as increased reports of marine mammal mortalities in the region. These findings demonstrate how PALS can detect and report, in real time, ecological changes caused by episodic disturbances such as harmful algal blooms.
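The abstract does not describe the PALS processing chain, but band-averaged sound spectrum levels of the kind reported above are commonly estimated from calibrated recordings with Welch's method. The sketch below is a rough illustration under that assumption; the hydrophone sensitivity and the four band edges are hypothetical (the abstract gives only the overall 0.172–20 kHz span).

```python
# Illustrative sketch (not the authors' PALS pipeline): estimate band-averaged
# sound spectrum levels (dB re 1 uPa^2/Hz) from a calibrated hydrophone signal.
# Band edges below are hypothetical; the abstract only states the bands span
# 0.172-20 kHz.
import numpy as np
from scipy.signal import welch

def band_spectrum_levels(x, fs, sensitivity_db=-170.0,
                         bands=((172, 1000), (1000, 5000),
                                (5000, 10000), (10000, 20000))):
    """x: raw waveform (volts); fs: sample rate (Hz);
    sensitivity_db: assumed hydrophone sensitivity (dB re 1 V/uPa)."""
    # Power spectral density via Welch's method (1-s Hann segments, 50% overlap).
    f, pxx = welch(x, fs=fs, nperseg=int(fs), noverlap=int(fs) // 2)
    # Convert V^2/Hz to uPa^2/Hz using the hydrophone sensitivity.
    pxx_upa = pxx / (10 ** (sensitivity_db / 10.0))
    levels = {}
    for lo, hi in bands:
        sel = (f >= lo) & (f < hi)
        # Mean PSD across the band, expressed in dB re 1 uPa^2/Hz.
        levels[(lo, hi)] = 10.0 * np.log10(np.mean(pxx_upa[sel]))
    return levels

# Example: one minute of synthetic noise sampled at 48 kHz.
rng = np.random.default_rng(0)
print(band_spectrum_levels(rng.standard_normal(48000 * 60), fs=48000))
```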
We used satellite-linked tags to evaluate dive behavior in offshore bottlenose dolphins (Tursiops spp.) near the island of Bermuda. The data provide evidence that bottlenose dolphins commonly perform both long (>272 s) and deep (>199 m) dives, with the deepest and longest dives reaching 1,000 m and 826 s (13.8 min), respectively. The data show a relationship between dive duration and dive depth for dives longer than about 272 s. There was a diurnal pattern to dive behavior, with most dives deeper than 50 m performed at night; deep diving began at sunset and varied throughout the night. We used the cumulative frequency of dive durations to estimate a behavioral aerobic dive limit (bADL) of around 560–666 s (9.3–11.1 min) for adult dolphins in this population. Dives exceeding the bADL were followed by significantly more time spent in the uppermost 50 m than dives shorter than the bADL. We conclude that the offshore ecotype off Bermuda, unlike the shallow-diving nearshore bottlenose dolphin, is a deep-diving ecotype and may provide a useful animal model for studying extreme diving behavior and adaptations.
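The abstract does not detail how the cumulative frequency of dive durations was converted into a bADL estimate; one common approach fits a two-segment ("broken-stick") regression to the cumulative frequency curve and takes the breakpoint as the limit. The sketch below illustrates that generic approach on synthetic data and is not the authors' procedure.

```python
# Illustrative sketch of one common bADL estimation approach (the abstract does
# not describe the authors' exact method): fit a two-segment "broken-stick"
# regression to the cumulative frequency of dive durations and take the
# breakpoint as the behavioral aerobic dive limit.
import numpy as np

def estimate_badl(durations_s):
    d = np.sort(np.asarray(durations_s, dtype=float))
    cumfreq = np.arange(1, d.size + 1) / d.size  # empirical CDF
    best = (np.inf, None)
    # Grid-search the breakpoint; fit a line to each side, sum squared error.
    for i in range(5, d.size - 5):
        sse = 0.0
        for xs, ys in ((d[:i], cumfreq[:i]), (d[i:], cumfreq[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if sse < best[0]:
            best = (sse, d[i])
    return best[1]

# Example with synthetic durations mixing short aerobic and rare long dives.
rng = np.random.default_rng(1)
dives = np.concatenate([rng.normal(300, 80, 400), rng.normal(700, 60, 40)])
print(f"estimated bADL ~ {estimate_badl(dives):.0f} s")
```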
Researchers can investigate many aspects of animal ecology through noninvasive photo-identification. Photo-identification is becoming more efficient as matching individuals between photos is increasingly automated. However, the convolutional neural network models that have facilitated this change need many training images to generalize well. As a result, they have often been developed for individual species that meet this threshold. These single-species methods might underperform, as they ignore potential similarities in identifying characteristics and in the photo-identification process among species. In this paper, we introduce a multi-species photo-identification model based on a state-of-the-art method in human facial recognition, the ArcFace classification head. Our model uses two such heads to jointly classify species and identities, allowing species to share information and parameters within the network. As a demonstration, we trained this model with 50,796 images from 39 catalogues of 24 cetacean species, evaluating its predictive performance on 21,192 test images from the same catalogues. We further evaluated its predictive performance with two external catalogues composed entirely of identities that the model did not see during training. The model achieved a mean average precision (MAP) of 0.869 on the test set; 10 of the test catalogues, representing seven species, achieved a MAP over 0.95. For some species, there was notable variation in performance among catalogues, largely explained by variation in photo quality. Finally, the model appeared to generalize well, with the two external catalogues scoring similarly to their species' counterparts in the larger test set. From our cetacean application, we provide a list of recommendations for potential users of this model, focusing on those with cetacean photo-identification catalogues. For example, users with high-quality images of animals identified by dorsal nicks and notches should expect near-optimal performance, while catalogues with higher proportions of indistinct individuals or poor-quality photos can expect decreasing performance. Finally, we note that this model is currently freely available as code in a GitHub repository and as a graphical user interface, with additional functionality for collaborative data management, via Happywhale.com.
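As a minimal sketch of the joint design described above (two ArcFace heads over a shared embedding), the PyTorch code below illustrates the idea; it is not the authors' released model, and the scale, margin, and layer sizes are assumptions.

```python
# Minimal PyTorch sketch of a two-head ArcFace design for joint species and
# identity classification; an illustration of the idea, not the released model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin classification head (Deng et al., ArcFace)."""
    def __init__(self, emb_dim, n_classes, s=64.0, m=0.5):  # s, m assumed
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_classes, emb_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cos = F.linear(F.normalize(emb), F.normalize(self.w))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin m only to the target-class angle.
        target = F.one_hot(labels, self.w.shape[0]).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits  # scaled logits for cross-entropy

class MultiSpeciesID(nn.Module):
    def __init__(self, backbone, emb_dim, n_species, n_ids):
        super().__init__()
        self.backbone = backbone              # any image encoder -> emb_dim
        self.species_head = ArcFaceHead(emb_dim, n_species)
        self.id_head = ArcFaceHead(emb_dim, n_ids)

    def forward(self, images, species, ids):
        emb = self.backbone(images)           # shared embedding for both tasks
        return self.species_head(emb, species), self.id_head(emb, ids)

# Training would sum cross-entropy over both heads, so the shared backbone
# learns from species and identity labels jointly.
```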
Photographic identification (photo-ID) of bottlenose dolphins using individually distinctive features on the dorsal fin is a well-established and useful tool for tracking individuals; however, this method can be labor-intensive, especially when dealing with large catalogs and/or infrequently surveyed populations. Computer vision algorithms have been developed that can find a fin in an image, characterize the features of the fin, and compare the fin to a catalog of known individuals to generate a ranking of potential matches based on dorsal fin similarity. We examined whether and how researchers use computer vision systems in their photo-ID process and developed an experiment to evaluate the performance of the most commonly used recently developed systems, using a long-term photo-ID database of known individuals curated by the Chicago Zoological Society's Sarasota Dolphin Research Program. Survey results obtained for the "Rise of the machines – Application of automated systems for matching dolphin dorsal fins: current status and future directions" workshop held at the 2019 World Marine Mammal Conference indicated that most researchers still rely on manual methods for comparing unknown dorsal fin images to reference catalogs of known individuals. Experimental evaluation of the finFindR R application, as well as the CurvRank, CurvRank v2, and finFindR implementations in Flukebook, suggests that high match rates can be achieved with these systems, with the highest rates obtained when only good- to excellent-quality images of fins with average to high distinctiveness are included in the matching process: for the finFindR R application and the CurvRank and CurvRank v2 algorithms within Flukebook, more than 98.92% of correct matches were returned in the top 50 ranked positions, and more than 91.94% were returned in the first position. Our results offer the first comprehensive examination of the performance and accuracy of computer vision algorithms designed to assist with the photo-ID process for bottlenose dolphins and can help build trust among researchers hesitant to use these systems. Based on our findings and discussions from the "Rise of the Machines" workshop, we provide recommendations for best practices for using computer vision systems for dorsal fin photo-ID.
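The top-1 and top-50 figures above are rank-based match rates; the sketch below shows, with a hypothetical data layout unrelated to finFindR or Flukebook internals, how such rates can be computed from ranked candidate lists.

```python
# Illustrative sketch (hypothetical data layout, not tied to finFindR or
# Flukebook): given each query's ranked candidate IDs and its true ID,
# compute top-k match rates like those reported above.
def match_rates(ranked_ids, true_ids, ks=(1, 50)):
    """ranked_ids: list of candidate-ID lists, best match first;
    true_ids: the correct catalog ID for each query."""
    rates = {}
    for k in ks:
        hits = sum(true in ranked[:k]
                   for ranked, true in zip(ranked_ids, true_ids))
        rates[k] = hits / len(true_ids)
    return rates

# Example: three queries; two matched at rank 1, one matched at rank 3.
ranked = [["F101", "F202"], ["F202", "F303"], ["F404", "F505", "F303"]]
truth = ["F101", "F202", "F303"]
print(match_rates(ranked, truth))  # {1: 0.666..., 50: 1.0}
```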