2024
DOI: 10.3390/rs16071246

PointMM: Point Cloud Semantic Segmentation CNN under Multi-Spatial Feature Encoding and Multi-Head Attention Pooling

Ruixing Chen,
Jun Wu,
Ying Luo
et al.

Abstract: For the actual collected point cloud data, there are widespread challenges such as semantic inconsistency, density variations, and sparse spatial distribution. A network called PointMM is developed in this study to enhance the accuracy of point cloud semantic segmentation in complex scenes. The main contribution of PointMM involves two aspects: (1) Multi-spatial feature encoding. We leverage a novel feature encoding module to learn multi-spatial features from the neighborhood point set obtained by k-nearest ne…
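The abstract describes building per-point neighborhoods via k-nearest neighbors before feature encoding. PointMM's own encoding details are truncated here, but the kNN neighborhood-gathering step it builds on can be sketched as follows (a minimal illustration with a synthetic point cloud; array names and sizes are placeholders, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((100, 3))  # toy point cloud of 100 points in 3D
k = 8

# Pairwise squared distances between all points, then take the k nearest
# neighbors of each point (each point's own index appears at distance 0).
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
knn_idx = np.argsort(d2, axis=1)[:, :k]

# Gather neighborhood point sets: one (k, 3) patch per point.
neighborhoods = points[knn_idx]  # shape (100, 8, 3)
```

A brute-force distance matrix is O(N²) and only suitable for small clouds; real pipelines typically use a KD-tree (e.g. `scipy.spatial.cKDTree`) for the same query.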

Cited by 4 publications (1 citation statement)
References 52 publications
“…Although the vegetation and bare-Earth training point clouds were different sizes (22,451,839 vegetation points and 102,539,815 bare-Earth points), class imbalance was addressed here by randomly down-sampling the bare-Earth points to equal the number of vegetation points prior to model training. A similar approach to balancing training data for point cloud segmentation was employed by [67]. The balanced training classes were then used to train and evaluate the MLP models.…”
mentioning
confidence: 99%
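The balancing step the citing authors describe — randomly down-sampling the majority class to the size of the minority class — can be sketched as follows (a minimal illustration with synthetic arrays; the names and point counts are placeholders, not the cited authors' data):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical point arrays, each row (x, y, z); the size ratio mimics the
# imbalance described above (far more bare-Earth points than vegetation).
vegetation = rng.random((1_000, 3))
bare_earth = rng.random((4_500, 3))

# Randomly down-sample the majority class (bare-Earth) without replacement
# so both classes contribute equally to model training.
idx = rng.choice(len(bare_earth), size=len(vegetation), replace=False)
bare_earth_balanced = bare_earth[idx]
```

Sampling without replacement keeps every retained point distinct; the trade-off of down-sampling is that it discards majority-class data, which is acceptable here because bare-Earth points are abundant.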