Aircraft detection from very high resolution (VHR) remote sensing images has drawn increasing interest in recent years due to its successful civil and military applications. However, several challenges remain: 1) extracting the high-level features and hierarchical feature representations of the objects is difficult; 2) manual annotation of the objects in large image sets is generally expensive and sometimes unreliable; and 3) locating objects within such large images is difficult and time consuming. In this paper, we propose a weakly supervised learning framework based on coupled convolutional neural networks (CNNs) for aircraft detection, which can solve these problems simultaneously. We first develop a CNN-based method to extract the high-level features and the hierarchical feature representations of the objects. We then employ an iterative weakly supervised learning framework to automatically mine and augment the training data set from the original image. We propose a coupled CNN method, which combines a candidate region proposal network and a localization network to extract the proposals and simultaneously locate the aircraft, and which is more efficient and accurate, even in large-scale VHR images. In the experiments, the proposed method was applied to three challenging high-resolution data sets: the Sydney International Airport data set, the Tokyo Haneda Airport data set, and the Berlin Tegel Airport data set. The extensive experimental results confirm that the proposed method achieves a higher detection accuracy than the other methods.

Index Terms—Aircraft detection, convolutional neural networks (CNNs), weakly supervised learning.
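The iterative weakly supervised mining step described above — train on a small seed set, score unlabeled candidates, fold high-confidence detections back into the training set, and repeat — can be sketched in plain Python. The function `mine_training_set`, the `toy_score` similarity, and the 1-D numeric "features" are hypothetical stand-ins for the paper's CNN-based scoring of candidate regions; this is a minimal sketch of the loop, not the authors' implementation.

```python
def mine_training_set(seed_positives, unlabeled, score_fn, threshold=0.9, rounds=3):
    """Iteratively augment a weak training set with confidently mined samples."""
    training_set = list(seed_positives)
    pool = list(unlabeled)
    for _ in range(rounds):
        # Score every unlabeled candidate against the current training set.
        scored = [(score_fn(x, training_set), x) for x in pool]
        # Keep only confident detections; add them as new positives.
        mined = [x for s, x in scored if s >= threshold]
        if not mined:
            break  # nothing confident left to mine
        training_set.extend(mined)
        pool = [x for s, x in scored if s < threshold]
    return training_set

# Toy scorer: similarity to the mean of the current positives (1-D features).
def toy_score(x, positives):
    mean = sum(positives) / len(positives)
    return 1.0 - min(abs(x - mean) / 10.0, 1.0)

result = mine_training_set([5.0, 5.2], [5.1, 4.9, 25.0], toy_score, threshold=0.9)
# The two near-seed candidates are mined; the outlier 25.0 is rejected.
```

In the paper's setting, `score_fn` would be the detection confidence of the current coupled CNN, and each round retrains the network on the augmented set before rescoring.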
Abstract: Deep neural networks (DNNs) face many problems in the very high resolution remote sensing (VHRRS) per-pixel classification field. Among these problems is the fact that, as the depth of the network increases, vanishing gradients degrade classification accuracy, and the corresponding increase in the number of parameters to be learned raises the risk of overfitting, especially when only a small number of labeled VHRRS samples is available for training. Further, the hidden layers in DNNs are not transparent enough, so the extracted features are insufficiently discriminative and carry significant redundancy. This paper proposes a novel depth-width-reinforced DNN that solves these problems to produce better per-pixel classification results in VHRRS. In the proposed method, densely connected neural networks and internal classifiers are combined to build a deeper network and balance network depth against performance. This strengthens the gradients, mitigates the negative effects of vanishing gradients as the network depth increases, and enhances the transparency of the hidden layers, making the extracted features more discriminative and reducing the risk of overfitting. In addition, the proposed method uses multi-scale filters to create a wider neural network. The depth of the filters at each scale is controlled to decrease redundancy, and the multi-scale filters enable the simultaneous use of joint spatio-spectral information and diverse local spatial structure. Furthermore, the network-in-network concept is applied to better fuse the deeper and wider designs, making the network operate more smoothly. The results of experiments conducted on BJ02, GF02, GeoEye, and QuickBird satellite images verify the efficacy of the proposed method.
The proposed method not only achieves competitive classification results but also shows that the network remains robust and performs well as the number of labeled training samples decreases, which fits the small-sample situation faced by VHRRS per-pixel classification.
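The dense connectivity this abstract builds on — each layer consuming the concatenation of the outputs of all earlier layers — can be illustrated with a small pure-Python sketch operating on a single pixel's feature vector. The function `dense_block`, the linear-plus-ReLU "layers", and the toy weights are illustrative assumptions; the paper's actual blocks are convolutional and include internal classifiers.

```python
def dense_block(x, layers):
    """DenseNet-style block: every layer reuses all earlier features."""
    features = list(x)           # running feature vector for one pixel
    for layer in layers:
        concat = list(features)  # dense connectivity: concatenate all earlier outputs
        # A "layer" here is a list of weight vectors, one per output unit.
        out = [max(sum(w_i * f_i for w_i, f_i in zip(w, concat)), 0.0)  # linear + ReLU
               for w in layer]
        features.extend(out)     # new features are appended, never replaced
    return features

# Two input features, then a 1-unit layer, then a 2-unit layer.
out = dense_block(
    [1.0, 2.0],
    [
        [[0.5, 0.5]],                          # layer 1: sees 2 features
        [[1.0, 0.0, 0.0], [-1.0, -1.0, -1.0]], # layer 2: sees 2 + 1 features
    ],
)
```

Because earlier features are carried forward rather than overwritten, each layer's gradient has a short path back to the input, which is the mechanism the abstract credits for counteracting vanishing gradients in deeper networks.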