Recently, automatic visual data understanding from drone platforms has become highly sought after. To facilitate this line of study, the Vision Meets Drone Object Detection in Image Challenge is held for the second time in conjunction with the 17th International Conference on Computer Vision (ICCV 2019) and focuses on object detection in images captured by drones. Results of 33 object detection algorithms are presented, and a short description of each participating detector is provided in the appendix. Our goal is to advance state-of-the-art detection algorithms and to provide a comprehensive evaluation platform for them. The evaluation protocol of the VisDrone-DET2019 Challenge and the comparison results of all submitted detectors on the released dataset are publicly available at http://www.aiskyeye.com/. The results demonstrate that there is still considerable room for improvement in object detection algorithms on drones.
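For context, detection challenges of this kind typically score submissions with average-precision metrics built on IoU-thresholded matching between predicted and ground-truth boxes. The Python sketch below illustrates only that core matching step under standard assumptions; the function names (`iou`, `match_detections`) and the single 0.5 threshold are illustrative and are not the challenge's official evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def match_detections(dets, gts, iou_thr=0.5):
    """Greedily match score-sorted detections to ground-truth boxes.

    Returns (score, is_true_positive) pairs from which a precision-recall
    curve and an average-precision value can be computed.
    """
    results, used = [], set()
    for d in sorted(dets, key=lambda d: d["score"], reverse=True):
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in used:
                continue
            overlap = iou(d["bbox"], g["bbox"])
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            used.add(best)
        results.append((d["score"], best is not None))
    return results
```

Averaging the resulting precision over multiple IoU thresholds and object categories yields the AP-style summary numbers commonly reported for such benchmarks.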
The larvae of some trunk-boring beetles leave barely any trace on the outside of a trunk while feeding within, making them difficult to detect. One approach to this problem uses a probe to pick up boring vibrations inside the trunk and to identify larval activity from those vibrations. Clean boring vibration signals, free of noise, are critical for accurate judgement; unfortunately, field environments are filled with natural and artificial noise. To address this issue, we constructed a boring vibration enhancement model named VibDenoiser, a significant contribution to this rarely studied domain. The model builds on deep learning-based speech enhancement: it consists of convolutional encoder and decoder layers with skip connections, and two SRU++ layers for sequence modeling. The dataset constructed for this study consists of boring vibrations of Agrilus planipennis Fairmaire, 1888 (Coleoptera: Buprestidae) and environmental noise. VibDenoiser achieves an improvement of 18.57 in SNR and runs in real time on a laptop CPU. The accuracy of four classification models increased by a large margin when using vibration clips enhanced by our model. These results demonstrate the strong enhancement performance of our model and its contribution to better boring vibration detection.
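The architecture described above (convolutional encoder/decoder with skip connections plus a recurrent bottleneck) can be sketched roughly as follows in PyTorch. This is a minimal illustration, not the published model: `nn.GRU` stands in for the two SRU++ layers, and the channel counts, kernel sizes, and depth are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DenoiserSketch(nn.Module):
    """Waveform denoiser sketch: conv encoder/decoder with skip connections
    and a 2-layer recurrent bottleneck (a stand-in for SRU++)."""

    def __init__(self, depth=3, hidden=64):
        super().__init__()
        self.encoders, self.decoders = nn.ModuleList(), nn.ModuleList()
        ch_in, ch = 1, hidden
        for _ in range(depth):
            self.encoders.append(nn.Sequential(
                nn.Conv1d(ch_in, ch, kernel_size=8, stride=4, padding=2),
                nn.ReLU()))
            # Decoders are built in reverse so decoders[0] undoes the deepest encoder.
            self.decoders.insert(0, nn.Sequential(
                nn.ConvTranspose1d(ch, ch_in, kernel_size=8, stride=4, padding=2),
                nn.ReLU() if ch_in != 1 else nn.Identity()))
            ch_in, ch = ch, ch * 2
        # Sequence modelling over encoded frames; GRU used here instead of SRU++.
        self.rnn = nn.GRU(ch_in, ch_in, num_layers=2, batch_first=True)

    def forward(self, x):                      # x: (batch, 1, samples), samples divisible by 4**depth
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        y, _ = self.rnn(x.transpose(1, 2))     # (batch, frames, channels)
        x = y.transpose(1, 2)
        for dec in self.decoders:
            x = x + skips.pop()                # skip connection from the matching encoder level
            x = dec(x)
        return x                               # enhanced waveform, same length as input


if __name__ == "__main__":
    model = DenoiserSketch()
    print(model(torch.randn(1, 1, 64 * 256)).shape)  # torch.Size([1, 1, 16384])
```

In such encoder/decoder designs the skip connections carry fine temporal detail past the downsampled bottleneck, which is what makes waveform-level enhancement of short, impulsive boring vibrations plausible.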
Traditional image-centered methods of plant identification can be confused by varying views, uneven illumination, and different growth stages. To tolerate these significant intraclass variances, convolutional recurrent neural networks (C-RNNs) are proposed for observation-centered plant identification that mimics human behavior. The C-RNN model is composed of two components: a convolutional neural network (CNN) backbone used as a feature extractor for images, and recurrent neural network (RNN) units that synthesize the multi-view features of an observation for the final prediction. Extensive experiments are conducted to explore the best combination of CNN and RNN. All models are trained end-to-end with 1 to 3 plant images of the same observation using truncated back-propagation through time. The experiments demonstrate that the combination of MobileNet and the Gated Recurrent Unit (GRU) offers the best trade-off between classification accuracy and computational overhead on the Flavia dataset. On the holdout test set, the mean 10-fold accuracy with 1, 2, and 3 input leaves reached 99.53%, 100.00%, and 100.00%, respectively. On the BJFU100 dataset, the C-RNN model achieves a classification accuracy of 99.65% with two-stage end-to-end training. The observation-centered method based on C-RNNs shows potential to further improve plant identification accuracy.
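A minimal PyTorch sketch of the MobileNet + GRU combination identified above as the best trade-off is given below. It is an assumption-laden illustration rather than the authors' code: MobileNetV2 from torchvision supplies the 1280-dimensional per-view features, and the hidden size, pooling, and classification head are illustrative choices.

```python
import torch
import torch.nn as nn
from torchvision import models


class CRNNSketch(nn.Module):
    """Observation-centred classifier: a shared CNN encodes each view,
    a GRU fuses the per-view features, and the final hidden state is classified."""

    def __init__(self, num_classes, hidden=256):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)
        self.cnn = backbone.features            # per-view feature maps, 1280 channels
        self.pool = nn.AdaptiveAvgPool2d(1)      # global average pooling per view
        self.rnn = nn.GRU(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, views):                    # views: (B, T, 3, 224, 224), T = 1..3 images
        b, t = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1))    # (B*T, 1280, h, w)
        feats = self.pool(feats).flatten(1)      # (B*T, 1280)
        feats = feats.view(b, t, -1)             # (B, T, 1280): one feature vector per view
        _, h_n = self.rnn(feats)                 # h_n: (1, B, hidden)
        return self.head(h_n[-1])                # (B, num_classes)


if __name__ == "__main__":
    model = CRNNSketch(num_classes=100)
    print(model(torch.randn(2, 3, 3, 224, 224)).shape)  # torch.Size([2, 100])
```

Because the same CNN weights encode every view and only the GRU sees the view sequence, the model can accept one, two, or three images of an observation at inference time, matching the 1-to-3-leaf evaluation described above.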