2021
DOI: 10.3390/rs13030371
Stacked Autoencoders Driven by Semi-Supervised Learning for Building Extraction from near Infrared Remote Sensing Imagery

Abstract: In this paper, we propose a Stack Auto-encoder (SAE)-Driven and Semi-Supervised (SSL)-Based Deep Neural Network (DNN) to extract buildings from relatively low-cost satellite near infrared images. The novelty of our scheme is that we employ only an extremely small portion of labeled data for training the deep model which constitutes less than 0.08% of the total data. This way, we significantly reduce the manual effort needed to complete an annotation process, and thus the time required for creating a reliable l…
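The SAE-driven pipeline the abstract alludes to — greedy layer-wise pretraining of autoencoders whose stacked encoders then initialise a deep classifier — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the layer sizes, learning rate, and toy data are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, hidden, epochs=200, lr=0.5):
    """Train a one-hidden-layer autoencoder on X; return the encoder (W, b)."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)           # encode
        R = sigmoid(H @ W2 + b2)           # decode (reconstruction)
        # backpropagate the mean-squared reconstruction error
        dR = (R - X) * R * (1 - R) / n
        dH = (dR @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ dR); b2 -= lr * dR.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return W1, b1

def stack_encoders(X, layer_sizes):
    """Greedy layer-wise pretraining: each autoencoder reconstructs the
    previous layer's codes; the stacked encoders initialise the DNN."""
    encoders, H = [], X
    for h in layer_sizes:
        W, b = train_autoencoder(H, h)
        encoders.append((W, b))
        H = sigmoid(H @ W + b)
    return encoders, H

# Toy stand-in for near-infrared image patches: 200 samples, 16 features.
X = rng.random((200, 16))
encoders, codes = stack_encoders(X, [8, 4])
print(codes.shape)
```

After pretraining, the stacked encoders would be fine-tuned end to end on the small labeled subset; that supervised stage is omitted here for brevity.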

Cited by 81 publications (36 citation statements)
References 42 publications
“…Since capturing new ground-truth datasets is tedious, time-consuming, and costly, designing new approaches that effectively deal with limited amounts of labeled training data is a vital research area. Additionally, there are approaches, especially those exploiting tensor-based techniques [57][58][59][60] and semi-supervised learning [50], that were proven highly effective, can be trained from small T's, and could be efficiently implemented in FPGA- and GPU-based architectures. Our current research efforts are focused on understanding the robustness of CNNs against small training samples in HSI analysis tasks [105], and will also include confronting classical machine learning and deep learning algorithms in such scenarios.…”
Section: Discussion
confidence: 99%
“…Among the solutions that can help deal with limited amounts of ground-truth data are the unsupervised [46] and semi-supervised [47,48] approaches, including active learning [49]. In [50], Protopapadakis et al. utilized a very small portion of labeled examples (constituting less than 0.08% of the available data) to train their deep models. Additionally, a semi-supervised technique was used to process unlabeled data and to estimate soft labels, which are later exploited to improve the training process.…”
Section: HSI Segmentation
confidence: 99%
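The soft-label scheme this statement describes — train on a tiny labeled subset, estimate soft labels on the unlabeled pool, then retrain with those estimates — can be sketched as a toy self-training loop. A NumPy logistic regression stands in for the deep model here; the data, labeled fraction, and confidence weighting are all hypothetical choices for illustration, not the method from [50].

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, w=None, epochs=300, lr=0.1):
    """Weighted logistic regression via gradient descent (w: per-sample weights)."""
    if w is None:
        w = np.ones(len(y))
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ theta)
        theta -= lr * (X.T @ (w * (p - y))) / w.sum()
    return theta

# Synthetic two-class data: 1000 samples, only 10 of them labeled (~1%).
X = rng.normal(size=(1000, 5))
true_theta = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
y_all = (X @ true_theta > 0).astype(float)
labeled = rng.choice(1000, size=10, replace=False)
mask = np.zeros(1000, dtype=bool); mask[labeled] = True

# 1) Supervised pass on the tiny labeled subset.
theta = fit_logreg(X[mask], y_all[mask])

# 2) Estimate soft labels (class probabilities) on the unlabeled pool.
soft = sigmoid(X[~mask] @ theta)
conf = np.abs(soft - 0.5) * 2          # confidence in [0, 1]

# 3) Retrain on labeled + soft-labeled data, weighting by confidence.
X2 = np.vstack([X[mask], X[~mask]])
y2 = np.concatenate([y_all[mask], soft])
w2 = np.concatenate([np.ones(mask.sum()), conf])
theta2 = fit_logreg(X2, y2, w2)

acc = float(((sigmoid(X @ theta2) > 0.5) == y_all).mean())
print(round(acc, 2))
```

The confidence weighting keeps near-0.5 soft labels from dominating the second training pass, which is one common way self-training schemes guard against reinforcing their own mistakes.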
“…The application of semisupervised learning technology to satellite remote sensing land classification is still at a developmental stage. Experiments have only been performed on simple datasets and have not yet been applied to real-world scenes at scale, for tasks such as image classification [75]-[79] and information extraction [80]-[83]. In large-scale scenarios, the three assumptions of semisupervised learning cannot be satisfied if unlabeled samples are added blindly.…”
Section: Semisupervised Learning
confidence: 99%
“…In addition, they used an F-beta measure to help the method account for skewed class distributions. Protopapadakis et al. [41] extracted buildings from satellite images with a near-infrared band, based on a deep learning model driven by Stacked Autoencoders (SAE) and Semi-Supervised Learning (SSL). To train the deep model, they used only a very small amount of labeled data.…”
Section: Introduction
confidence: 99%