2015
DOI: 10.1109/tgrs.2015.2400449

Learning High-level Features for Satellite Image Classification With Limited Labeled Samples

Abstract: This paper presents a novel method addressing the classification task of satellite images when limited labeled data are available together with a large amount of unlabeled data. Instead of using semi-supervised classifiers, we solve the problem by learning high-level features, called semisupervised ensemble projection (SSEP). More precisely, we propose to represent an image by projecting it onto an ensemble of weak training (WT) sets sampled from a Gaussian approximation of multiple feature spaces. Given a se…
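The ensemble-projection idea in the abstract can be sketched roughly as follows. This is a loose, hypothetical reconstruction under stated assumptions: the Gaussian sampling scheme, the choice of logistic-regression weak learners, and all parameter values here are illustrative, not the paper's exact SSEP procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def ensemble_projection(X_unlabeled, X_query, n_projections=5,
                        n_classes=3, n_per_class=10):
    """Represent each query image by its responses to an ensemble of weak
    classifiers, each trained on a pseudo-labeled weak training (WT) set
    sampled from a Gaussian approximation of the unlabeled feature space.
    (Illustrative sketch only, not the paper's exact construction.)"""
    mu = X_unlabeled.mean(axis=0)
    sigma = X_unlabeled.std(axis=0) + 1e-8  # Gaussian approximation of the feature space
    features = []
    for _ in range(n_projections):
        # Draw pseudo class prototypes, then a small WT set around each prototype.
        protos = rng.normal(mu, sigma, size=(n_classes, X_unlabeled.shape[1]))
        Xw = np.vstack([rng.normal(p, 0.1 * sigma, size=(n_per_class, p.size))
                        for p in protos])
        yw = np.repeat(np.arange(n_classes), n_per_class)
        # Weak learner trained on the pseudo-labeled WT set.
        clf = LogisticRegression(max_iter=200).fit(Xw, yw)
        # The projection: class-posterior responses of the query images.
        features.append(clf.predict_proba(X_query))
    return np.hstack(features)  # shape (n_query, n_projections * n_classes)
```

The high-level feature vector is simply the concatenation of each weak classifier's posterior responses; a supervised classifier trained on the few labeled samples would then operate in this projected space.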

Cited by 128 publications (62 citation statements)
References 36 publications
“…The first dataset is an object-based scene: the 21-class UC Merced dataset [55], which was manually generated from large images in the USGS National Map Urban Area Imagery collection covering various urban areas around the United States. The pixel resolution of this public-domain imagery is approximately 0.30 m. The second dataset contains two large HRS scenes: a large scene of Tongzhou (Scene-TZ) [56] and a large scene near the Tucson airport (Scene-AT) [57], both captured by the GeoEye-1 satellite sensor. The GeoEye-1 satellite carries a high-resolution CCD camera, which acquires images with a spatial resolution of up to 0.41 m in the panchromatic band and up to 1.65 m in the multispectral bands.…”
Section: Results
Mentioning confidence: 99%
“…Similar to the process of forming , another graph ℎ can be constructed based on the activation map , but with the weight assigned to the edges defined as in Equation (3). Again, a Markov chain on ℎ is used to obtain the normalization map , namely the final saliency map .…”
Section: Saliency Map
Mentioning confidence: 99%
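The Markov-chain saliency construction quoted above can be illustrated with a minimal GBVS-style sketch. The edge weighting (absolute activation dissimilarity) and the power-iteration normalization below are generic assumptions for illustration, not the cited paper's Equation (3):

```python
import numpy as np

def markov_saliency(activation, n_iter=500, eps=1e-12):
    """Build a fully connected graph over the cells of an activation map,
    weight edges by activation dissimilarity, row-normalize the weights
    into a Markov transition matrix, and take the chain's stationary
    (equilibrium) distribution as the saliency map. (Generic sketch.)"""
    a = activation.ravel().astype(float)
    n = a.size
    # Edge weight: dissimilarity between node activations (no self-loops).
    W = np.abs(a[:, None] - a[None, :])
    np.fill_diagonal(W, 0.0)
    # Row-stochastic transition matrix of the Markov chain on the graph.
    P = W / (W.sum(axis=1, keepdims=True) + eps)
    # Power iteration toward the stationary distribution pi = pi @ P.
    pi = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        pi = pi @ P
        pi = pi / pi.sum()
    return pi.reshape(activation.shape)
```

Mass accumulates at nodes that are dissimilar from the rest of the map, so the equilibrium distribution highlights locally anomalous (salient) regions.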
“…The interpretation of such a huge amount of RS imagery is a challenging task of significant importance for disaster monitoring, urban planning, traffic control and so on [1][2][3][4][5]. RS scene classification, which aims at automatically classifying extracted sub-regions of the scenes into a set of semantic categories, is an effective method for RS image interpretation [6,7].…”
Section: Introduction
Mentioning confidence: 99%
“…Remote sensing image processing has achieved great advances in recent years, from low-level tasks, such as segmentation, to high-level ones, such as classification [1][2][3][4][5][6][7]. However, the task becomes incrementally more difficult as the level of abstraction increases, going from pixels, to objects, and then scenes.…”
Section: Introduction
Mentioning confidence: 99%
“…However, the task becomes incrementally more difficult as the level of abstraction increases, going from pixels, to objects, and then scenes. Classifying remote sensing images according to a set of semantic categories is a very challenging problem because of high intra-class variability and low inter-class distance [5][6][7][8][9]. Different objects may appear at different scales and orientations in a given class, and the same objects may be found in images belonging to different classes.…”
Section: Introduction
Mentioning confidence: 99%