This letter presents a satellite image classification method with two objectives: 1) incorporating visual attention into satellite image classification: biologically inspired saliency information is exploited in the image-representation phase, so that the method concentrates on the objects and structures of interest; and 2) handling satellite image classification without a learning phase. A two-layer sparse coding (TSC) model is designed to discover the "true" neighbors of the images and bypass the computationally intensive learning phase of satellite image classification. The underlying philosophy of the TSC is that an image can be more sparsely reconstructed from images (sparse I) belonging to the same category (sparse II). Images are classified according to a newly defined "image-to-category" similarity based on the coding coefficients. Requiring no training phase, the method achieves very promising results. Experimental comparisons are shown on a real satellite image database.
Index Terms—Satellite image classification, two-layer sparse coding (TSC), visual attention.
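The abstract's core idea, classifying by how sparsely a test image is reconstructed from same-category images and scoring an "image-to-category" similarity from the coding coefficients, can be sketched as a single-layer sparse-representation classifier. This is a minimal illustration, not the paper's exact two-layer formulation; the greedy orthogonal-matching-pursuit solver and the per-category coefficient-mass score are assumptions:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y using at most k columns of D."""
    residual = y.copy()
    idx = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in idx:
            idx.append(j)
        sol, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ sol               # re-fit and update residual
    coef[idx] = sol
    return coef

def classify(D, labels, y, k=3):
    """Score each category by the absolute coefficient mass its images receive
    (a stand-in for the paper's image-to-category similarity)."""
    coef = omp(D, y, k)
    cats = sorted(set(labels))
    scores = {c: sum(abs(coef[i]) for i, l in enumerate(labels) if l == c)
              for c in cats}
    return max(scores, key=scores.get)
```

Here `D` holds one (normalized) image descriptor per column and `labels` gives each column's category; the predicted category is the one whose images contribute most to the sparse reconstruction.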
Daily acquisition of large amounts of aerial and satellite images has facilitated subsequent automatic interpretation of these images. One such interpretation is object detection. Despite the great progress made in this domain, the detection of multi-scale objects, especially small objects, in high-resolution satellite (HRS) images has not been adequately explored. As a result, detection performance turns out to be poor. To address this problem, we first propose a unified multi-scale convolutional neural network (CNN) for geospatial object detection in HRS images. It consists of a multi-scale object proposal network and a multi-scale object detection network, both of which share a multi-scale base network. The base network produces feature maps with different receptive fields, each responsible for objects of a different scale. Then, we use the multi-scale object proposal network to generate high-quality object proposals from the feature maps. Finally, we use these object proposals with the multi-scale object detection network to train an accurate object detector. Comprehensive evaluations on a publicly available remote sensing object detection dataset and comparisons with several state-of-the-art approaches demonstrate the effectiveness of the presented method. The proposed method achieves the best mean average precision (mAP) value of 89.6% and runs at 10 frames per second (FPS) on a GTX 1080Ti GPU.
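The multi-scale proposal idea, where coarser feature maps with larger receptive fields are responsible for larger objects, can be illustrated by the anchor-tiling step common to such proposal networks. The strides, scales, and ratios below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def generate_anchors(fm_sizes, strides, scales, ratios=(0.5, 1.0, 2.0)):
    """Tile anchor boxes (x1, y1, x2, y2) over each feature-map level.
    One base scale per level, so coarser maps (larger stride, hence larger
    receptive field) carry the larger anchors."""
    anchors = []
    for (h, w), stride, scale in zip(fm_sizes, strides, scales):
        for y in range(h):
            for x in range(w):
                cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # cell center
                for r in ratios:
                    aw, ah = scale * np.sqrt(r), scale / np.sqrt(r)
                    anchors.append([cx - aw / 2, cy - ah / 2,
                                    cx + aw / 2, cy + ah / 2])
    return np.array(anchors)

# E.g., a 256x256 image at strides 8/16/32 gives 32x32, 16x16, and 8x8 maps;
# small anchors come from the fine map, large anchors from the coarse one.
anchors = generate_anchors([(32, 32), (16, 16), (8, 8)],
                           strides=[8, 16, 32], scales=[32, 64, 128])
```

The proposal network would then score each anchor for objectness and regress it toward a ground-truth box; only the tiling step is shown here.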
This paper presents a novel method for classifying satellite images when only limited labeled data are available together with a large amount of unlabeled data. Instead of using semi-supervised classifiers, we solve the problem by learning a high-level feature representation, called semisupervised ensemble projection (SSEP). More precisely, we propose to represent an image by projecting it onto an ensemble of weak training (WT) sets sampled from a Gaussian approximation of multiple feature spaces. Given a set of images, only some of which are labeled, we first extract preliminary features, e.g., color and texture, to form a low-level image description. We then propose a new semisupervised sampling algorithm that builds an ensemble of informative WT sets by exploiting these feature spaces with a Gaussian normal affinity, which ensures both the reliability and diversity of the ensemble. Discriminative functions are subsequently learned from the resulting WT sets, and each image is represented by concatenating its projected values onto these WT sets for final classification. Moreover, we account for the potential redundant information in the SSEP representation and use sparse coding to reduce it. Experiments on high-resolution remote sensing data demonstrate the effectiveness of the proposed method.
Index Terms—Ensemble projection (EP), feature representation, image classification, semisupervised learning.
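The ensemble-projection pipeline, sample small WT sets, learn a discriminative function from each, and concatenate every image's projected values as its new representation, can be sketched as follows. This is a heavily simplified stand-in: WT sets are formed here from nearest neighbors of two randomly chosen prototype samples with ±1 pseudo-labels, and the discriminative functions are ridge-regression hyperplanes, whereas the paper samples WT sets via a Gaussian affinity over multiple feature spaces:

```python
import numpy as np

def ensemble_projection(X, n_sets=10, k=3, seed=0):
    """Represent each row of X by its projections onto an ensemble of weak
    hypotheses, each fit on a small pseudo-labeled 'weak training' set."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    feats = []
    for _ in range(n_sets):
        s0, s1 = rng.choice(n, size=2, replace=False)     # two prototype samples
        d0 = np.linalg.norm(X - X[s0], axis=1)
        d1 = np.linalg.norm(X - X[s1], axis=1)
        idx = np.concatenate([np.argsort(d0)[:k], np.argsort(d1)[:k]])
        y = np.concatenate([np.ones(k), -np.ones(k)])     # pseudo-labels by prototype
        Xw = X[idx]
        w = np.linalg.solve(Xw.T @ Xw + 1e-3 * np.eye(d), Xw.T @ y)  # ridge fit
        feats.append(X @ w)                               # projected values
    return np.column_stack(feats)                         # (n, n_sets) representation
```

The resulting matrix replaces the low-level features as input to an ordinary classifier; the paper additionally applies sparse coding to reduce redundancy across the ensemble dimensions.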
In the past decade, object detection has achieved significant progress in natural images but not in aerial images, due to the massive variations in the scale and orientation of objects caused by the bird's-eye view of aerial images. More importantly, the lack of large-scale benchmarks has become a major obstacle to the development of object detection in aerial images (ODAI). In this paper, we present a large-scale Dataset of Object deTection in Aerial images (DOTA) and comprehensive baselines for ODAI. The proposed DOTA dataset contains 1,793,658 object instances of 18 categories with oriented-bounding-box annotations, collected from 11,268 aerial images. Based on this large-scale and well-annotated dataset, we build baselines covering 10 state-of-the-art algorithms with over 70 configurations, where the speed and accuracy of each model have been evaluated. Furthermore, we provide a code library for ODAI and build a website for evaluating different algorithms. Previous challenges held on DOTA have attracted more than 1,300 teams worldwide. We believe that the expanded large-scale DOTA dataset, the extensive baselines, the code library, and the challenges can facilitate the design of robust algorithms and reproducible research on the problem of object detection in aerial images.
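DOTA's oriented-bounding-box annotations are commonly distributed as plain-text lines of the form `x1 y1 x2 y2 x3 y3 x4 y4 category difficult`, one quadrilateral per object. A minimal parser for that line format, assuming exactly this field layout, which also derives the enclosing horizontal box many detectors need:

```python
def parse_dota_line(line):
    """Parse one DOTA-style annotation line:
    'x1 y1 x2 y2 x3 y3 x4 y4 category difficult'."""
    parts = line.split()
    quad = [float(v) for v in parts[:8]]       # 4 corner points of the oriented box
    xs, ys = quad[0::2], quad[1::2]
    return {
        "quad": quad,
        "category": parts[8],
        "difficult": int(parts[9]),
        "hbb": (min(xs), min(ys), max(xs), max(ys)),  # enclosing horizontal box
    }
```

For example, `parse_dota_line("10 10 60 10 60 40 10 40 plane 0")` yields a `plane` instance whose horizontal box spans (10, 10) to (60, 40).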
In-season site-specific nitrogen (N) management is a promising strategy to improve crop N use efficiency and reduce risks of environmental contamination. To successfully implement such precision management strategies, it is important to accurately estimate yield potential without additional topdressing N application (YP0) as well as to precisely assess the responsiveness to additional N application (RI) during the growing season. Previous research has mainly used the normalized difference vegetation index (NDVI) or ratio vegetation index (RVI) obtained from the GreenSeeker active crop canopy sensor, with two fixed bands in the red and near-infrared (NIR) spectrum, to estimate these two parameters. The development of the three-band Crop Circle active sensor offers a potential to improve in-season estimation of YP0 and RI. The objectives of this study were twofold: (1) identify important vegetation indices obtained from the Crop Circle ACS-470 sensor for estimating rice YP0 and RI; and (2) evaluate their potential improvements over GreenSeeker NDVI and RVI. Four site-years of field N rate experiments were conducted in 2012 and 2013 at the Jiansanjiang Experiment Station of China Agricultural University in Northeast China. The GreenSeeker and Crop Circle ACS-470 active canopy sensors, the latter with green, red edge, and NIR bands, were used to collect rice canopy reflectance data at key growth stages. The results indicated that both the GreenSeeker (best R² = 0.66 and 0.70, respectively) and Crop Circle (best R² = 0.71 and 0.77, respectively) sensors worked well for estimating YP0 and RI at the stem elongation stage. At the booting stage, the Crop Circle red edge optimized soil-adjusted vegetation index (REOSAVI, R² = 0.82) and green ratio vegetation index (R² = 0.73) explained 26% and 22% more variability in YP0 and RI, respectively, than GreenSeeker NDVI or RVI.
At the heading stage, the GreenSeeker indices became saturated and consequently could not be used for YP0 or RI estimation, while Crop Circle REOSAVI and the normalized green index could still explain more than 70% of YP0 and RI variability. It is concluded that both sensors performed similarly at the stem elongation stage, but significantly better results were obtained with the Crop Circle sensor at the booting and heading stages. Furthermore, the results revealed that Crop Circle green band-based vegetation indices performed well for RI estimation, while the red edge-based vegetation indices were the best for estimating YP0 at later growth stages.
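The indices compared above are simple functions of band reflectances. A minimal sketch of the standard definitions follows; note that the REOSAVI formula here assumes the common OSAVI form with the red-edge band substituted for red, and GRVI is assumed to be NIR/green, since the abstract does not spell out the exact formulations:

```python
def vegetation_indices(green, red, red_edge, nir):
    """Canopy-sensor vegetation indices from band reflectances (0-1 scale)."""
    ndvi = (nir - red) / (nir + red)          # normalized difference vegetation index
    rvi = nir / red                           # ratio vegetation index
    grvi = nir / green                        # green ratio vegetation index (assumed)
    # Red-edge OSAVI: OSAVI's 0.16 soil-adjustment term, red edge in place of red
    reosavi = (1 + 0.16) * (nir - red_edge) / (nir + red_edge + 0.16)
    return {"NDVI": ndvi, "RVI": rvi, "GRVI": grvi, "REOSAVI": reosavi}
```

The saturation noted at heading arises because NDVI's normalized difference flattens as NIR grows much larger than red in dense canopies, whereas red-edge and green bands retain sensitivity.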