2022
DOI: 10.1255/jsi.2022.a11
Comparison of 2D and 3D semantic segmentation in urban areas using fused hyperspectral and lidar data

Abstract: Multisensor data fusion has become a hot topic in the remote sensing research community, thanks to significant technological advances and the ability to extract information that would be challenging to obtain with a single sensor. However, sensor fusion requires advanced analysis methods such as deep learning. A framework is designed to effectively fuse hyperspectral and lidar data for semantic segmentation in the urban environment. Our work proposes a method of reducing dimensions by exploring the mos…

Cited by 5 publications (13 citation statements)
References 51 publications
“…Ground truth data were unavailable for low and high vegetation due to their high dynamics and seasonal differences. Therefore, these classes were extracted semi-automatically by calculating the Normalised Difference Vegetation Index (NDVI) for the HS scene [48] and distinguishing high and low vegetation based on raster-based LiDAR features, relying on the method from Kuras et al [35] (Figure 1d). The main features for differentiating high and low vegetation were selected prior to analysis based on knowledge and experience.…”
Section: Ground Truth
confidence: 99%
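The NDVI calculation mentioned above is a standard per-pixel ratio of near-infrared and red reflectance; the specific bands used for the HS scene are not given in the excerpt, so the band choice below is a placeholder assumption. A minimal sketch:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, computed per pixel.

    `nir` and `red` are reflectance rasters of the same shape; which
    hyperspectral bands map to NIR and red depends on the sensor and
    is assumed here, not taken from the cited paper.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Guard against division by zero where both bands are empty.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

Thresholding such an NDVI raster (e.g. NDVI > 0.3 for vegetation) and then splitting vegetated pixels by a LiDAR-derived height feature is one plausible reading of the semi-automatic extraction the statement describes.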
“…The final input to the semantic segmentation algorithms consists of abundance maps from HS and LiDAR data. We considered 2D ResU-Net model architectures in this study [35,53,54], comparing the segmentation process with and without training-data augmentation for the 2019 (Figure 2, box 3) and 2021 (Figure 2, box 6) datasets, without model parameter regularization. The original U-Net consists of an encoder part with multiple blocks of convolutions and max pools for feature extraction and a corresponding decoder with transposed convolutions for upscaling after each convolution block [59].…”
Section: Semantic Segmentation
confidence: 99%