Published: 2019
DOI: 10.3390/rs11242961
A Point-Wise LiDAR and Image Multimodal Fusion Network (PMNet) for Aerial Point Cloud 3D Semantic Segmentation

Abstract: 3D semantic segmentation of point clouds aims at assigning semantic labels to each point while utilizing and respecting the 3D representation of the data. Detailed 3D semantic segmentation of urban areas can assist policymakers, insurance companies, and governmental agencies in applications such as urban growth assessment, disaster management, and traffic supervision. The recent proliferation of remote sensing techniques has led to the production of high-resolution multimodal geospatial data. Nonetheless, currently, only lim…

Cited by 16 publications (14 citation statements)
References 40 publications (61 reference statements)
“…To address this problem, a constrained network of points [21] has been adopted. The lost information was complemented by structure awareness [22], multimodal aggregation [23,24], an incremental approach [25], and fragment integrity [26]. However, the structural features and the effect of large-scale segmentation need to be improved.…”
Section: B. The Methods of Deep Learning
confidence: 99%
“…By applying RGB features, overall accuracy increased by 2%, from 86% to 88%. Additionally, Poliyapram et al. [24] propose an end-to-end point-wise LiDAR and image multimodal fusion network (PMNet) for classification of an ALS point cloud of Osaka city in combination with aerial image RGB features. Their results show that combining intensity and RGB features could improve overall accuracy from 65% to 79%, while performance in identifying buildings improved by 4%.…”
Section: Related Work
confidence: 99%
“…Some of the normalized features, which are visible in both FLIR and LLTV cameras and fed to the network, are straight edges, winding edges, anisotropy, and contrast information from each image. In [40], the authors propose a novel deep learning-based LiDAR and image fusion neural network (PMNet) for extracting meaningful information from aerial images and 3D point clouds. The fusion procedure uses spatial correspondence (point-wise fusion), performed at the feature level, and shows improved performance with low memory usage and fewer computational parameters.…”
Section: Black Box Sensor Fusion
confidence: 99%
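The citation statements above describe PMNet's core idea: fusing per-point LiDAR features with the RGB features of the spatially corresponding image pixel at the feature level rather than at the input or decision level. The following is a minimal sketch of that point-wise, feature-level concatenation step only; it is not the PMNet architecture itself, and the feature dimensions and function names are illustrative assumptions.

```python
import numpy as np

def pointwise_fuse(lidar_feats, rgb_feats):
    """Concatenate each point's LiDAR-derived feature vector with the RGB
    features of the image pixel it spatially corresponds to (point-wise,
    feature-level fusion). Both inputs must have one row per point."""
    assert lidar_feats.shape[0] == rgb_feats.shape[0], "one RGB row per point"
    return np.concatenate([lidar_feats, rgb_feats], axis=1)

# Hypothetical example: 5 points, each with 4 LiDAR features
# (e.g. x, y, z, intensity) ...
lidar = np.random.rand(5, 4)
# ... and the RGB values of the pixel each point projects onto.
rgb = np.random.rand(5, 3)

fused = pointwise_fuse(lidar, rgb)
print(fused.shape)  # (5, 7): each point now carries both modalities
```

In a full network the fused per-point vectors would then feed into shared MLP layers, as in PointNet-style architectures; fusing at the feature level keeps the per-point spatial correspondence, which is consistent with the memory and parameter advantages noted in the citation statement.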