Wild birds are monitored with the important objectives of identifying their habitats and estimating the sizes of their populations. Migratory birds in particular are recorded intensively during specific periods to forecast the possible spread of animal diseases such as avian influenza. In this study, deep-learning-based object-detection models were constructed with the aid of aerial photographs collected by an unmanned aerial vehicle (UAV). The dataset of aerial photographs includes diverse images of birds in various habitats, such as in the vicinity of lakes and on farmland. In addition, aerial images of bird decoys were captured to obtain a wider variety of bird patterns and more accurate bird information. Bird detection models such as Faster Region-based Convolutional Neural Network (R-CNN), Region-based Fully Convolutional Network (R-FCN), Single Shot MultiBox Detector (SSD), RetinaNet, and You Only Look Once (YOLO) were created, and the performance of all models was evaluated by comparing their computing speed and average precision. The test results show Faster R-CNN to be the most accurate and YOLO to be the fastest among the models. The combined results demonstrate that deep-learning-based detection methods, in combination with UAV aerial imagery, are well suited for bird detection in various environments.
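The detectors above were compared by average precision (AP). As a rough illustration of how AP can be computed for one class, the sketch below matches detections to ground-truth boxes by intersection-over-union (IoU) and applies 11-point interpolation; this matching rule and interpolation scheme are common defaults, not necessarily the exact evaluation protocol of the study.

```python
# Illustrative AP computation for one object class (common default protocol,
# not necessarily the study's exact evaluation code).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(detections, ground_truths, iou_thr=0.5):
    """AP for one class: detections = [(score, box)], ground_truths = [box]."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = set()
    tp = fp = 0
    precisions, recalls = [], []
    for _score, box in detections:
        # Greedily match each detection to the best unmatched ground truth.
        best, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if j in matched:
                continue
            o = iou(box, gt)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(ground_truths))
    # 11-point interpolated AP: mean of max precision at recall >= r.
    ap = 0.0
    for r in [i / 10 for i in range(11)]:
        p = max((p for p, rec in zip(precisions, recalls) if rec >= r),
                default=0.0)
        ap += p / 11
    return ap
```

A perfect detection of a single ground-truth box yields an AP of 1.0; ranking errors and unmatched detections lower it.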
Diverse pheromones and pheromone-based traps, as well as images acquired from insects captured by pheromone-based traps, have been studied and developed to monitor the presence and abundance of pests and to protect plants. The purpose of this study is to construct models that detect three species of pest moths in pheromone trap images using deep-learning object-detection methods and to compare their speed and accuracy. Moth images in pheromone traps were collected for training and evaluation of the deep-learning detectors. The collected images were then subjected to a labeling process that defines the ground truths of target objects, namely their box locations and classes. Because the dataset contained only a few negative objects, non-target insects were labeled as an unknown class and additional images of non-target insects were added to the dataset. Moreover, data augmentation methods were applied during training, and the parameters of detectors pre-trained on the COCO dataset were used as initial parameter values. Seven detectors were trained and evaluated: Faster R-CNN ResNet 101, Faster R-CNN ResNet 50, Faster R-CNN Inception v.2, R-FCN ResNet 101, RetinaNet ResNet 50, RetinaNet Mobile v.2, and SSD Inception v.2. The Faster R-CNN ResNet 101 detector exhibited the highest accuracy (mAP of 90.25), and the seven detector types showed different accuracies and speeds. Furthermore, when unexpected insects were included in the collected images, a four-class detector with an unknown class (non-target insects) showed a lower detection error than a three-class detector.
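The labeling and augmentation steps above can be sketched as follows. The four-class label map (three target moth species plus an "unknown" class for non-target insects) mirrors the description, but the class names are hypothetical, and the horizontal flip shown is just one typical augmentation, not the authors' exact pipeline.

```python
# Hypothetical four-class label map: three target species plus an "unknown"
# class for non-target insects (species names are placeholders, not from the
# study).
LABEL_MAP = {1: "target_species_A", 2: "target_species_B",
             3: "target_species_C", 4: "unknown"}

def hflip_boxes(boxes, image_width):
    """Mirror (x1, y1, x2, y2) boxes across the vertical image centerline.

    A horizontal-flip augmentation must transform the box labels together
    with the image so the ground truths stay aligned.
    """
    flipped = []
    for x1, y1, x2, y2 in boxes:
        flipped.append((image_width - x2, y1, image_width - x1, y2))
    return flipped
```

For example, in a 100-pixel-wide image, a box spanning x = 10..30 flips to x = 70..90 while its vertical extent is unchanged.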
The black pine bast scale, M. thunbergianae, is a major insect pest of black pine and causes serious environmental and economic losses in forests. It is therefore essential to monitor the occurrence and population of M. thunbergianae, and monitoring with pheromone traps is commonly employed. Because counting the insects in these pheromone traps by hand is labor intensive and time consuming, this study proposes automated deep-learning counting algorithms based on pheromone trap images. The pheromone traps collected in the field were photographed in the laboratory, and the images were used for training, validation, and testing of the detection models. In addition, an image cropping method was applied to enable the successful detection of small objects, given the small size of M. thunbergianae in trap images. Detection and counting performance were evaluated and compared for a total of 16 models under eight model conditions and two cropping conditions, and most models achieved a counting accuracy of 95% or higher. These results show that the artificial-intelligence-based pest counting method proposed in this study is suitable for constant and accurate monitoring of insect pests.
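The image-cropping idea can be sketched as follows: a large trap image is split into overlapping tiles so that small insects occupy more pixels per detector input, and tile-level detections are then shifted back to full-image coordinates. The tile size and overlap below are illustrative values, not the study's settings.

```python
# Illustrative tiling scheme for small-object detection (tile size and
# overlap are assumed values, not the study's configuration).

def tile_origins(width, height, tile=512, overlap=64):
    """Top-left corners of overlapping tiles covering a width x height image."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # Add a final tile flush with the right/bottom edge if coverage is short.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

def to_full_image(box, origin):
    """Shift a tile-local (x1, y1, x2, y2) box by the tile's (x, y) origin."""
    x1, y1, x2, y2 = box
    ox, oy = origin
    return (x1 + ox, y1 + oy, x2 + ox, y2 + oy)
```

After running the detector on each tile, duplicate detections in the overlap regions would typically be merged (e.g., by non-maximum suppression), which is omitted here for brevity.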
An infrared lifetime thermal imaging technique for the measurement of lettuce seed viability was evaluated. Thermal emission signals from mid-infrared images of healthy seeds and seeds aged for 24, 48, and 72 h were obtained and reconstructed using regression analysis. The emission signals were fitted with a two-term exponential model that had two amplitudes and two time variables as lifetime parameters. The lifetime thermal decay parameters were significantly different for seeds with different aging times. Single-seed viability was visualized using thermal lifetime images constructed from the calculated lifetime parameter values. The time-dependent thermal signal decay characteristics, along with the decay amplitude and delay time images, can be used to distinguish aged lettuce seeds from normal seeds.
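A minimal sketch (an assumed form, not the authors' implementation) of fitting the two-term exponential model described above, with two amplitudes and two lifetime time constants, using nonlinear least squares:

```python
# Sketch of the two-term exponential decay fit described in the abstract:
# T(t) = a1 * exp(-t / tau1) + a2 * exp(-t / tau2).
# Parameter values and starting guesses below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def two_term_decay(t, a1, tau1, a2, tau2):
    """Thermal emission decay with two amplitudes and two lifetimes."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_decay(t, signal, p0=(1.0, 0.5, 1.0, 3.0)):
    """Least-squares fit; returns the array (a1, tau1, a2, tau2)."""
    params, _cov = curve_fit(two_term_decay, t, signal, p0=p0, maxfev=10000)
    return params

# Demonstration on synthetic data with known (assumed) parameters.
t = np.linspace(0.0, 10.0, 200)
synthetic = two_term_decay(t, 5.0, 0.5, 2.0, 3.0)
fitted = fit_decay(t, synthetic)
```

Fitting each pixel's decay signal this way yields per-pixel amplitude and lifetime values, from which lifetime images like those described above can be assembled.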