Effective wildlife management relies on the accurate and precise detection of individual animals. Such data can be challenging to collect for many cryptic species, particularly those that live in structurally complex environments. This study introduces a new automated detection method that applies published object detection algorithms to heat signatures in thermal imagery acquired from remotely piloted aircraft systems (RPAS). As an initial case study, we used this approach to detect koalas (Phascolarctos cinereus) and validated it against ground surveys of tracked, radio-collared koalas in Petrie, Queensland. The automated method yielded a higher probability of detection (68–100%), higher precision (43–71%), lower root mean square error (RMSE), and lower mean absolute error (MAE) than manual assessment of the RPAS-derived thermal imagery, in a comparable amount of time. This approach allows for more reliable, less invasive detection of koalas in their natural habitat, and it has great potential to inform and improve management decisions for threatened species and other difficult-to-survey species.
1. Accurate detection of individual animals is integral to the management of vulnerable wildlife species, but it is often difficult and costly to achieve for species that occur over wide or inaccessible areas or engage in cryptic behaviours. There is growing acceptance of the use of drones (also known as unmanned aerial vehicles, UAVs, and remotely piloted aircraft systems, RPAS) to detect wildlife, largely because of the capacity of drones to rapidly cover large areas compared to ground survey methods. While drones can aid the capture of large amounts of imagery, detection requires either manual evaluation of the imagery or automated detection using machine learning algorithms. While manual evaluation of drone-acquired imagery is possible and sometimes necessary, the powerful combination of drones with automated detection of wildlife in this imagery is much faster and, in some cases, more accurate than using human observers. Despite the great potential of this emerging approach, most attention to date has been paid to the development of algorithms, and little is known about the constraints around successful detection (P. W. J. Baxter and G. Hamilton, 2018, Ecosphere, 9, e02194).
2. We reviewed studies conducted over the last 5 years in which wildlife species were detected automatically in drone-acquired imagery, to understand how technological constraints, environmental conditions and ecological traits of target species impact detection with automated methods.
3. From this review, we found that automated detection could be achieved for a wider range of species and under a greater variety of environmental conditions than reported in previous reviews of automated and manual detection in drone-acquired imagery. A high probability of automated detection could be achieved efficiently using fixed-wing platforms and RGB sensors for species that were large and occurred in open, homogeneous environments with little vegetation or variation in topography, while infrared sensors and multirotor platforms were necessary to successfully detect small, elusive species in complex habitats.
4. The insight gained in this review could allow conservation managers to use drones and machine learning algorithms more accurately and efficiently to collect the abundance data on vulnerable populations that are critical to their conservation.
Drones and machine learning-based automated detection methods are being used by ecologists to conduct wildlife surveys with increasing frequency. Where traditional survey methods have been evaluated, a range of factors has been found to influence detection probabilities, including individual differences among conspecific animals, which can introduce biases into survey counts. There has been no such evaluation of drone-based surveys using automated detection in a natural setting. Establishing this is important, since any biases in counts made using these methods must be accounted for to provide accurate data and improve decision-making for threatened species. In this study, a rare opportunity to survey a ground-truthed, individually marked population of 48 koalas in their natural habitat allowed direct comparison of the factors impacting detection probability in ground observation and in drone surveys with manual and automated detection. We found that sex and host tree preferences impacted detection in ground surveys and in manual analysis of drone imagery: female koalas were likely to be under-represented, and koalas positioned higher in taller trees were detected less frequently when present. The tree species composition of a forest stand also impacted detections. In contrast, none of these factors impacted automated detection. This suggests that the combination of drone-captured imagery and machine learning does not suffer from the same biases that affect conventional ground surveys, providing further evidence that drones and machine learning are promising tools for gathering reliable detection data to better inform the management of threatened populations.
Reliable estimates of abundance are critical in effectively managing threatened species, but the feasibility of integrating data from wildlife surveys completed using advanced technologies such as remotely piloted aircraft systems (RPAS) and machine learning into abundance estimation methods such as N-mixture modeling is largely unknown due to the unique sources of detection error associated with these technologies. We evaluated two modeling approaches for estimating the abundance of koalas detected automatically in RPAS imagery: (a) a generalized N-mixture model and (b) a modified Horvitz–Thompson (H-T) estimator method combining generalized linear models and generalized additive models for overall probability of detection, false detection, and duplicate detection. The final estimates from each model were compared to the true number of koalas present as determined by telemetry-assisted ground surveys. The modified H-T estimator approach performed best, with the true count of koalas captured within the 95% confidence intervals around the abundance estimates in all four surveys in the testing dataset (n = 138 detected objects), a particularly strong result given the difficulty in attaining accuracy found with previous methods. The results suggested that N-mixture models in their current form may not be the most appropriate approach to estimating the abundance of wildlife detected in RPAS surveys with automated detection, and that accurate estimates could be made with approaches that account for spurious detections.
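The Horvitz–Thompson logic described in this abstract can be sketched in a few lines: each detected object contributes the inverse of its detection probability, down-weighted by the probability that it is spurious or a duplicate. The function name, the input format, and the example probabilities below are illustrative assumptions for exposition, not the authors' fitted GLM/GAM outputs.

```python
def ht_abundance(detections):
    """Modified Horvitz-Thompson abundance estimate (illustrative sketch).

    Each element of `detections` carries per-object probabilities, as might
    be predicted by separate detection/false-detection/duplication models:
      p_detect    -- probability the object would be detected at all
      p_false     -- probability the detection is spurious
      p_duplicate -- probability the detection duplicates another object
    """
    n_hat = 0.0
    for d in detections:
        # Weight: chance this detection is a real, non-duplicate animal...
        w = (1.0 - d["p_false"]) * (1.0 - d["p_duplicate"])
        # ...inflated by the inverse of its detection probability.
        n_hat += w / d["p_detect"]
    return n_hat

# Example with made-up probabilities for three detected objects:
dets = [
    {"p_detect": 0.8, "p_false": 0.1, "p_duplicate": 0.0},
    {"p_detect": 0.5, "p_false": 0.2, "p_duplicate": 0.0},
    {"p_detect": 0.9, "p_false": 0.0, "p_duplicate": 0.5},
]
print(round(ht_abundance(dets), 3))  # → 3.281
```

The appeal of this form over an N-mixture model, per the abstract, is that spurious and duplicate detections enter the estimator explicitly rather than being absorbed into a single detection-probability term.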
Introduction: Plant image datasets have the potential to greatly improve our understanding of the phenotypic response of plants to environmental and genetic factors. However, manual data extraction from such datasets is known to be time-consuming and resource intensive. Therefore, the development of efficient and reliable machine learning methods for extracting phenotype data from plant imagery is crucial.
Methods: In this paper, a current gold-standard computer vision method for detecting and segmenting objects in three-dimensional imagery (StarDist-3D) is applied to X-ray micro-computed tomography scans of mature pods of oilseed rape (Brassica napus).
Results: With relatively minimal training effort, the fine-tuned StarDist-3D model accurately detected (validation F1-score = 96.3%, testing F1-score = 99.3%) and predicted the shape (mean matched score = 90%) of seeds.
Discussion: This method allowed rapid extraction of data on the number, size, shape, spacing and location of seeds in specific valves, which can be integrated into models of plant development or crop yield. Additionally, the fine-tuned StarDist-3D model provides an efficient way to create a dataset of segmented images of individual seeds that could be used to further explore the factors affecting seed development, abortion and maturation synchrony within the pod. There is also potential for the fine-tuned StarDist-3D method to be applied to imagery of seeds from other plant species, as well as to imagery of similarly shaped plant structures such as beans or wheat grains, provided the structures targeted for detection and segmentation can be described as star-convex polygons.
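The downstream phenotype extraction described above (seed count, size, location) can be sketched as a pass over a 3-D instance-label volume of the kind an instance segmentation model such as StarDist-3D produces (0 = background, 1..N = individual seeds). This is a minimal sketch of post-processing under that assumption, not the StarDist-3D API itself; the function name and toy volume are invented for illustration.

```python
import numpy as np

def seed_phenotypes(labels, voxel_size=1.0):
    """Extract per-seed phenotype data from a 3-D instance-label array.

    `labels` is an integer volume (0 = background, 1..N = seed instances).
    Returns one dict per seed with its id, volume, and centroid, the kind
    of data that could feed pod-development or yield models.
    """
    seeds = []
    for seed_id in np.unique(labels):
        if seed_id == 0:  # skip background
            continue
        mask = labels == seed_id
        coords = np.argwhere(mask)  # voxel coordinates of this seed
        seeds.append({
            "id": int(seed_id),
            "volume": float(mask.sum()) * voxel_size ** 3,
            "centroid": coords.mean(axis=0).tolist(),
        })
    return seeds

# Toy 4x4x4 volume containing two "seeds":
vol = np.zeros((4, 4, 4), dtype=int)
vol[0:2, 0:2, 0:2] = 1  # seed 1: a 2x2x2 block (8 voxels)
vol[3, 3, 3] = 2        # seed 2: a single voxel
stats = seed_phenotypes(vol)
print(len(stats), stats[0]["volume"])  # → 2 8.0
```

Seed spacing would follow from pairwise distances between the returned centroids; in practice one would also convert voxel volumes to physical units via the scan's voxel size.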