2018
DOI: 10.3390/rs10091429

Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework

Abstract: In this paper, we present a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural networks rather than at the final layers. The early fusion architecture has fewer parameters and thereby reduces the computational time and GPU memory during training and inference. We also propose a composite fusion architecture…
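As a rough illustration of the early-versus-late fusion contrast described in the abstract, the following is a minimal PyTorch sketch. The channel counts, encoder depth, class count, and module names are assumptions made for illustration only and do not reproduce the paper's actual architecture; the point is simply that fusing the sensor channels up front lets one shared encoder replace two parallel ones, which is where the parameter saving comes from.

```python
import torch
import torch.nn as nn


def make_encoder(in_channels: int) -> nn.Sequential:
    # Tiny two-layer encoder used only for illustration; the paper's
    # networks are much deeper segmentation backbones.
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )


class EarlyFusionNet(nn.Module):
    """Fuse all sensor channels up front and share a single encoder."""

    def __init__(self, irrg_channels: int = 3, dsm_channels: int = 1, num_classes: int = 6):
        super().__init__()
        self.encoder = make_encoder(irrg_channels + dsm_channels)   # bands fused at the input
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, irrg: torch.Tensor, dsm: torch.Tensor) -> torch.Tensor:
        x = torch.cat([irrg, dsm], dim=1)          # fusion at the very first layer
        return self.classifier(self.encoder(x))


class LateFusionNet(nn.Module):
    """Run one encoder per sensor and fuse only before the classifier."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.irrg_encoder = make_encoder(3)
        self.dsm_encoder = make_encoder(1)
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, irrg: torch.Tensor, dsm: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.irrg_encoder(irrg), self.dsm_encoder(dsm)], dim=1)
        return self.classifier(feats)              # fusion deferred to the final layers


if __name__ == "__main__":
    irrg = torch.randn(1, 3, 256, 256)  # infrared-red-green tile
    dsm = torch.randn(1, 1, 256, 256)   # normalised elevation tile
    early, late = EarlyFusionNet(), LateFusionNet()
    n_early = sum(p.numel() for p in early.parameters())
    n_late = sum(p.numel() for p in late.parameters())
    # One shared encoder instead of two parallel ones, hence fewer weights.
    print(f"early fusion: {n_early} params, late fusion: {n_late} params")
    print(early(irrg, dsm).shape)  # torch.Size([1, 6, 256, 256])
```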

Cited by 70 publications (41 citation statements)
References 47 publications (72 reference statements)
“…The reason why we choose Infrared, Red, and Green band data is mainly for consistency with the ISPRS Vaihingen dataset. Besides, this is also for fair comparison with models that use IRRG data only, like RIT_2 [47].…”
Section: Dataset Description
confidence: 99%
“…In recent years, deep learning methods have been broadly utilized in various remote sensing image-based applications, including object detection [2,3,20], scene classification [21,22], land cover, and land use mapping [23,24]. Since it was proposed in 2014, deep convolutional neural network (CNN)-based semantic segmentation algorithms [25] have been applied to many pixel-wise remote sensing image analysis tasks, such as road extraction, building extraction, urban land use classification, maritime semantic labeling, vehicle extraction, damage mapping, weed mapping, and other land cover mapping tasks [5,6,[26][27][28][29][30][31]. Several recent studies used semantic segmentation methods for building extraction from remote sensing images [9][10][11][12][32][33][34][35][36][37][38].…”
Section: Introduction
confidence: 99%
“…The main disadvantage, however, lies in an expensive training effort. Nevertheless, an nDSM is an essential input in processing (for example, as used by Marmanis et al (2018) and Piramanayagam et al (2018)) which can only be computed after extracting the DEM. The generated building map can be either building regions (such as a binary image) or segmented laser points.…”
Section: Introduction and Previous Research
confidence: 99%
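The quoted passage notes that the nDSM can only be derived once the bare-earth DEM/DTM has been extracted. Below is a minimal sketch of that dependency, assuming a simple per-pixel subtraction and toy height values that are not taken from either cited paper:

```python
import numpy as np


def ndsm_from_models(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Per-pixel above-ground height; small negatives from noise are clipped."""
    return np.clip(dsm - dtm, 0.0, None)


# Toy 2x2 tile, heights in metres; the bottom-right pixel sits on a building roof.
dsm = np.array([[270.1, 270.3],
                [270.2, 285.0]])   # digital surface model
dtm = np.array([[270.0, 270.1],
                [270.1, 270.2]])   # extracted bare-earth terrain model
print(ndsm_from_models(dsm, dtm))  # ~0 m on bare ground, ~14.8 m on the roof
```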