2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00548
4D Panoptic LiDAR Segmentation

Cited by 48 publications (48 citation statements) · References 60 publications
“…MOTS [18] first proposed the multi-object tracking and segmentation task, and MOTP [19] extended it to panoptic segmentation and the 3D domain. 4D-PLS [3] explored the tracking-by-segmentation paradigm, which can associate objects at the point level. The core of 4D-PLS is density-based clustering, which benefits from its semantic features.…”
Section: B. 3D Multi-Object Tracking
confidence: 99%
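As context for the quoted description, the sketch below shows one minimal form that point-level, density-based instance grouping can take: pick the most confident remaining point as a seed, score neighbors with a Gaussian kernel in a learned embedding space, and absorb points above a threshold. The greedy seed selection, kernel, and threshold values here are illustrative assumptions, not the exact 4D-PLS formulation.

```python
import numpy as np

def greedy_density_clustering(embeddings, objectness,
                              assign_thresh=0.5, bandwidth=1.0,
                              min_seed_score=0.1):
    """Greedily group points into instances by density around seed points.

    embeddings: (N, D) per-point feature vectors (assumed to be learned).
    objectness: (N,) per-point seed scores (assumed to be predicted).
    Returns an (N,) array of instance ids, with -1 for unassigned points.
    """
    n = embeddings.shape[0]
    ids = np.full(n, -1, dtype=int)
    unassigned = np.ones(n, dtype=bool)
    next_id = 0
    while unassigned.any():
        cand = np.where(unassigned)[0]
        # Most "object-like" remaining point becomes the cluster seed.
        seed = cand[np.argmax(objectness[cand])]
        if objectness[seed] < min_seed_score:
            break  # no confident seeds left; the rest stays unassigned
        # Gaussian density of each remaining point around the seed embedding.
        d2 = np.sum((embeddings[cand] - embeddings[seed]) ** 2, axis=1)
        prob = np.exp(-0.5 * d2 / bandwidth ** 2)
        members = cand[prob > assign_thresh]  # always includes the seed (prob = 1)
        ids[members] = next_id
        unassigned[members] = False
        next_id += 1
    return ids
```

Running such a step over embeddings computed from several overlapped, consecutive scans is one way the quoted "associate objects at point level" across time can be pictured.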
“…The training processes of these methods require bounding-box labels for the moving targets in each LiDAR frame. 4D Panoptic LiDAR Segmentation (4D-PLS) [3] is based on a tracking-by-segmentation paradigm and has shown remarkable performance on the LiDAR-based MOT task. However, labeling for the segmentation task is more expensive, even with many advanced tools [4].…”
Section: Introduction
confidence: 99%
“…However, KITTI does not provide dense (per-point) 3D labels on the point cloud. Closely related to our work, ApolloScape [43] annotates static scene elements in 3D space and projects them into the 2D image space, followed by manual annotation of dynamic objects in images. In this work, we annotate both static and dynamic objects in 3D, providing coherent annotations for dynamic objects in both 2D and 3D.…”
Section: Datasets
confidence: 99%
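The 3D-to-2D label transfer mentioned in the quote can be sketched with a standard pinhole projection. The function name, the (3, 4) projection matrix `P`, and the bounds check below are generic assumptions for illustration, not ApolloScape's actual annotation toolchain.

```python
import numpy as np

def project_labels_to_image(points_xyz, labels, P, image_hw):
    """Project 3D-annotated points into 2D pixel coordinates with a
    pinhole camera model (illustrative; not a specific dataset's API).

    points_xyz: (N, 3) points in the camera frame.
    labels:     (N,) per-point semantic/instance labels.
    P:          (3, 4) camera projection matrix.
    image_hw:   (H, W) image size for bounds checking.
    Returns pixel coordinates (M, 2) and their labels (M,) for visible points.
    """
    h, w = image_hw
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    proj = homo @ P.T                                                  # (N, 3)
    z = proj[:, 2]
    in_front = z > 0  # keep only points in front of the camera
    uv = proj[in_front, :2] / z[in_front, None]  # perspective divide
    lab = labels[in_front]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], lab[inside]
```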
“…For instance, Cityscapes [24] offers a benchmark suite for pixel- and instance-level semantic segmentation as well as 3D vehicle detection. SemanticKITTI [4], [7] hosts LiDAR segmentation challenges to predict the category of every point. Among datasets with both 2D and 3D annotations, KITTI [31], nuScenes [17], and ApolloScape [43] provide benchmarks on a set of vision tasks including detection, stereo, localization, multi-object tracking, and segmentation in both 2D and 3D.…”
Section: Datasets
confidence: 99%