2022
DOI: 10.1109/tits.2022.3145588
MASS: Multi-Attentional Semantic Segmentation of LiDAR Data for Dense Top-View Understanding

Cited by 36 publications (8 citation statements)
References 59 publications
“…For example, Google’s Waymo uses LiDAR for detecting unexpected objects [27]. To accurately classify and recognize objects from LiDAR data, various models have been developed, and algorithm-development studies using semantic segmentation have been conducted [28, 29]. Moreover, studies have investigated the detectability of objects by LiDAR.…”
Section: Literature Review
confidence: 99%
“…Although these learning-based instance segmentation methods can accurately extract objects, they cannot recognize the ground or walls, which are also important for autonomous navigation. Unlike instance segmentation, other works proposed semantic segmentation methods that assign a point-wise class label [18]–[20]. These methods extract the ground without considering traversability.…”
Section: Learning-based Segmentation
confidence: 99%
“…lem is to use multi-camera or LiDAR systems [8], [9], [10]. These systems, however, introduce a further level of complexity, as they must either stitch images from multiple camera sources or combine RGB image information with LiDAR views to obtain a holistic view of the entire surroundings [11].…”
Section: Robust Model Baseline Model
confidence: 99%