2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00291
RangeDet: In Defense of Range View for LiDAR-based 3D Object Detection

Cited by 171 publications (61 citation statements). References 34 publications.
“…There exist some other works that exploit variability in depth along the vertical axis [2,3,5,6,11]. Though their work is conceptually similar to ours, they report improvements, in contrast to this work.…”
Section: Related Work (supporting)
confidence: 52%
“…To address this issue, we take advantage of the Meta-Kernel block to extract the meta features by dynamically learning the weights from the relative Cartesian coordinates and range values. As in [16], the Meta-Kernel is designed to effectively locate objects in LiDAR scans by exploiting the geometric information from the Cartesian coordinates. In this paper, we employ it to capture the spatial and temporal information for semantic segmentation.…”
Section: Feature Extraction (mentioning)
confidence: 99%
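The Meta-Kernel operator the excerpt describes can be sketched roughly as follows: for each range-image pixel, a small MLP maps the relative geometry of each neighbor (Cartesian offset plus range difference) to per-channel weights, which modulate the neighbor's features before aggregation. This is a minimal illustrative sketch, not the authors' implementation; the NumPy form, two-layer MLP shape, sum aggregation, and 3×3 window are assumptions.

```python
import numpy as np

def meta_kernel(feats, xyz, rng, mlp_w1, mlp_w2, k=3):
    """Illustrative Meta-Kernel convolution on a range image (a sketch).

    feats : (H, W, C)  per-pixel input features
    xyz   : (H, W, 3)  Cartesian coordinates of each range-image pixel
    rng   : (H, W)     range (depth) value of each pixel
    mlp_w1, mlp_w2 : weights of an assumed 2-layer MLP mapping the 4-D
        relative geometry (dx, dy, dz, dr) to a C-dim weight vector
    """
    H, W, C = feats.shape
    r = k // 2
    out = np.zeros_like(feats)
    for i in range(r, H - r):
        for j in range(r, W - r):
            acc = np.zeros(C)
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    # relative Cartesian offset and range difference
                    rel = np.concatenate([
                        xyz[i + di, j + dj] - xyz[i, j],
                        [rng[i + di, j + dj] - rng[i, j]],
                    ])
                    # MLP predicts per-channel weights from geometry
                    h = np.maximum(rel @ mlp_w1, 0.0)   # ReLU hidden layer
                    w = h @ mlp_w2                      # (C,) weights
                    acc += w * feats[i + di, j + dj]    # element-wise reweight
            out[i, j] = acc
    return out
```

Because the weights are computed from geometry rather than fixed per window position, the same operator adapts to the irregular 3D structure hidden behind neighboring range-image pixels.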
“…In contrast to the direct fusion method, our proposed range residual image efficiently represents multi-frame point cloud information, which can improve the accuracy and the speed of training and testing under the limited computing resources. Since the range residual image obtained from spherical projection may not effectively capture the local geometric structures, we take advantage of the Meta-Kernel operator [16] to extract the meta features by dynamically learning the weights from the relative Cartesian coordinates and range values. Thus, it reduces the inconsistency between the 2D range image coordinates input and Cartesian coordinates output.…”
Section: Introduction (mentioning)
confidence: 99%
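The spherical projection underlying the range (residual) image can be sketched as follows: azimuth maps to the image column and elevation to the row. This is a generic sketch, not code from the cited papers; the image size and vertical field of view are assumptions typical of a 64-beam LiDAR.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024,
                         fov_up=np.deg2rad(3.0),
                         fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) LiDAR point cloud to range-image coordinates.

    Returns per-point (row, col) indices and range values. H, W, and the
    vertical FOV are illustrative assumptions for a 64-beam sensor.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / rng)       # elevation angle
    fov = fov_up - fov_down
    u = 0.5 * (1.0 - yaw / np.pi) * W          # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * H   # row from elevation
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)
    return v, u, rng
```

A range residual image, as the excerpt uses the term, would then subtract a past frame's projected range values from the current frame's at each pixel, so that multiple frames collapse into one compact 2D input.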
“…Early approaches to 3D object detection from point clouds can be categorized into two classes. The first class of methods transforms the point cloud into more compact representations, e.g., Bird's-Eye-View (BEV) images [3,11,34], frontal-view range images [2,7,18], and volumetric features [14,43,51]. Yan et al. [42] developed a sparse convolutional backbone to efficiently process the point clouds by encoding them into a 3D sparse tensor.…”
Section: Related Work, 2.1 3D Object Detection From Point Clouds (mentioning)
confidence: 99%