2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01289
LiDAR-based Panoptic Segmentation via Dynamic Shifting Network

Abstract: With the rapid advances of autonomous driving, it becomes critical to equip its sensing system with more holistic 3D perception. However, existing works focus on parsing either the objects (e.g. cars and pedestrians) or scenes (e.g. trees and buildings) from the LiDAR sensor. In this work, we address the task of LiDAR-based panoptic segmentation, which aims to parse both objects and scenes in a unified manner. As one of the first endeavors towards this new challenging task, we propose the Dynamic Shifting Netw…

Cited by 55 publications (21 citation statements)
References 53 publications
“…SemanticKITTI. We compare our method with RangeNet++ [33] + PointPillars [23], LPSAD [32], KPConv [41] + PointPillars [23], Panoster [16], Panoptic-PolarNet [48], DS-Net [18], EfficientLPS [38], GP-S3Net [37], SCAN [46] and Panoptic-PHNet [24]. Table 1 shows comparisons of LiDAR panoptic segmentation performance on the SemanticKITTI test split.…”
Section: Benchmark Results
confidence: 99%
“…Compared with LiDAR-based 3D semantic segmentation, LiDAR-based panoptic segmentation further segments foreground point clouds into different instances. Most previous top-tier works [48,18,38,37,46,24] start from this difference and produce panoptic predictions following a three-stage paradigm, i.e., first predict semantic results, then separate instances based on semantic predictions, and finally fuse the two results. This paradigm makes the panoptic segmentation performance inevitably bounded by semantic predictions and requires cumbersome post-processing.…”
Section: Related Work
confidence: 99%
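The three-stage paradigm quoted above (semantic prediction, then instance separation on "thing" points, then fusion) can be sketched roughly as follows. This is a minimal illustration, not any cited method's actual pipeline: `semantic_fn` and `cluster_fn` are hypothetical placeholders standing in for a trained semantic network and a class-wise clustering routine.

```python
import numpy as np

def three_stage_panoptic(points, semantic_fn, cluster_fn, thing_classes):
    """Sketch of the three-stage LiDAR panoptic paradigm.

    points: (N, 3) point cloud; semantic_fn and cluster_fn are
    hypothetical stand-ins for a semantic net and a clusterer.
    """
    # Stage 1: per-point semantic labels, shape (N,)
    sem = semantic_fn(points)

    # Stage 2: separate "thing" points into instances, class by class
    inst = np.zeros(len(points), dtype=np.int64)  # 0 marks "stuff" (no instance)
    next_id = 1
    for cls in thing_classes:
        mask = sem == cls
        if not mask.any():
            continue
        ids = cluster_fn(points[mask])            # per-class cluster labels >= 0
        inst[mask] = ids + next_id                # offset so ids stay globally unique
        next_id += int(ids.max()) + 1

    # Stage 3: fuse semantic and instance labels into the panoptic result
    return sem, inst
```

As the quoted passage notes, instance quality here is bounded by the semantic stage: any point misclassified in Stage 1 can never be recovered into the correct instance in Stage 2.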
“…With the availability of RGB-D sensors, we have made progress on tasks like [16,17,27,40], 3D instance segmentation [11,14,15,22,26,54] and 3D object detection [35,41,59] which work on indoor scenes. Similarly, the accessibility to modern LiDAR sensors has made it possible to work with outdoor 3D scenes, where again recent works have targeted the tasks of 3D object detection [43,44,53,55], semantic segmentation [33,47,57,61], panoptic segmentation [5,21,34,60] and multi-object tracking [3,12,50,56]. Perceiving outdoor 3D environments from LiDAR data is particularly relevant for robotics and autonomous driving applications, and hence has gained significant traction in the recent past.…”
Section: Introduction
confidence: 99%
“…LiDAR semantic segmentation, which can provide a thorough scene understanding, has attracted extensive study. Most existing works pay close attention to point-to-structure representations, including spherical projection [1,2], bird's-eye-view projection [3] and 3D voxelization [4,5], while another group of methods focuses on network architecture design [6,4,7]. However, these methods often neglect the inherent difficulty caused by the long-tailed data distribution.…”
Section: Introduction
confidence: 99%
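Of the point-to-structure representations mentioned in the quote above, spherical projection is the easiest to illustrate: each 3D point is mapped by its azimuth and elevation angles onto a 2D range image. The sketch below assumes illustrative field-of-view bounds (+3° / −25°, roughly a 64-beam sensor layout); the image size and FOV are parameters, not values from any cited work.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR cloud onto an H x W range image.

    fov_up / fov_down are in degrees; values here are illustrative.
    Returns the range image plus each point's (column, row) indices.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                      # range per point
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    u = 0.5 * (1.0 - yaw / np.pi) * W                       # column index
    v = (fov_up_r - pitch) / fov * H                        # row index
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int64)

    img = np.full((H, W), -1.0)                             # -1 marks empty pixels
    img[v, u] = r                                           # later points overwrite earlier ones
    return img, u, v
```

Once projected, the range image can be fed to an ordinary 2D segmentation network, which is the appeal of the representation; the cost is that multiple points can collide in one pixel, as the overwrite in the last assignment shows.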