2021
DOI: 10.48550/arxiv.2103.10039
Preprint

RangeDet: In Defense of Range View for LiDAR-based 3D Object Detection

Abstract: In this paper, we propose an anchor-free single-stage LiDAR-based 3D object detector, RangeDet. The most notable difference from previous works is that our method is purely based on the range view representation. Compared with the commonly used voxelized or Bird's Eye View (BEV) representations, the range view representation is more compact and free of quantization error. Although there are works adopting it for semantic segmentation, its performance in object detection is largely behind voxelized or BEV count…
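The range view representation the abstract refers to can be sketched as a spherical projection of the point cloud onto a 2D image, where each pixel stores the exact measured range rather than a voxelized approximation. The field-of-view bounds and image size below are illustrative values for a spinning LiDAR, not RangeDet's exact configuration:

```python
import numpy as np

def points_to_range_image(points, H=64, W=512,
                          fov_up=np.radians(2.0), fov_down=np.radians(-24.8)):
    """Project an (N, 3) LiDAR point cloud to an H x W range image.
    Each pixel stores the range (depth) of the point that lands in it;
    empty pixels stay 0. If several points map to one pixel, the last wins."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range per point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))      # elevation angle
    u = 0.5 * (1.0 - yaw / np.pi)                   # column in [0, 1]
    v = (fov_up - pitch) / (fov_up - fov_down)      # row in [0, 1]
    cols = np.clip((u * W).astype(int), 0, W - 1)
    rows = np.clip((v * H).astype(int), 0, H - 1)
    img = np.zeros((H, W), dtype=np.float32)
    img[rows, cols] = r
    return img
```

Note the compactness argument: the range values themselves are stored exactly (no voxel discretization of depth), though binning the angles into pixels is still a discretization; for a spinning LiDAR the beams map almost losslessly onto rows and columns.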

Cited by 16 publications (25 citation statements)
References 36 publications
“…Without bells and whistles, our approach works better than PV-RCNN [39]. Furthermore, we compare our framework on the vehicle class for different distances with state-of-the-art methods, including StarNet [30], PointPillars [19], RCD [1], Det3D [63], RangeDet [10] and PV-RCNN [39]. As shown in Table 2, M3DETR outperforms PV-RCNN significantly in both LEVEL 1 and LEVEL 2 difficulty levels across all distances, demonstrating the effectiveness of the newly proposed framework.…”
Section: Results
confidence: 99%
“…PIXOR (Yang, Luo, and Urtasun 2018) assigns all pixels inside ground truth bounding boxes as positive samples in the BEV feature map. Similarly, some range-view-based methods assign pixels on range view maps that are inside 3D ground truth boxes as positive samples (Fan et al 2021;Meyer et al 2019;Bewley et al 2020). Multiple anchor-free detection networks formulate the detection problem as a keypoint detection problem (Ge et al 2020;Sun et al 2021;Yin, Zhou, and Krahenbuhl 2021b).…”
Section: Anchor-free/Anchor-based LiDAR Detector
confidence: 99%
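The PIXOR-style label assignment quoted above — treating every feature-map pixel inside a ground-truth box as a positive sample — can be illustrated with a minimal sketch. For brevity this uses axis-aligned boxes in pixel units; real BEV detectors assign against rotated boxes:

```python
import numpy as np

def assign_positives(H, W, boxes):
    """Mark feature-map pixels whose centers fall inside any ground-truth
    box as positive samples (1), PIXOR-style anchor-free assignment.
    `boxes` is a list of axis-aligned (x0, y0, x1, y1) in pixel units;
    rotated boxes, used in practice, are omitted for brevity."""
    mask = np.zeros((H, W), dtype=np.uint8)
    ys, xs = np.mgrid[0:H, 0:W]
    cx, cy = xs + 0.5, ys + 0.5          # pixel centers
    for x0, y0, x1, y1 in boxes:
        inside = (cx >= x0) & (cx < x1) & (cy >= y0) & (cy < y1)
        mask[inside] = 1
    return mask
```

The keypoint-based alternative mentioned in the same passage would instead mark only the pixel nearest each object center (often with a Gaussian falloff), trading dense supervision for less ambiguity between neighboring objects.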
“…Object detection from point clouds has become a practical solution to robotics vision, especially in autonomous driving applications. Like the detection methods on 2D images, the 3D detection methods can also be divided into two groups: single-stage (Ge et al 2020;Zhou and Tuzel 2018;Yan, Mao, and Li 2018;He et al 2020;Zheng et al 2021;Lang et al 2019;Bewley et al 2020;Fan et al 2021;Chen et al 2020a) and two-stage (Shi, Wang, and Li 2019;Yang et al 2019;Qi et al 2017;Shi et al 2020c;Li, Wang, and Wang 2021;Shi et al 2021;Deng et al 2021a;Yin, Zhou, and Krahenbuhl 2021a;Sun et al 2021), in terms of the model structure. The two-stage methods usually show better accuracy (Shi, Wang, and Li 2019;Shi et al 2021;Deng et al 2021a) in the classification confidence and the box regression, than the single-stage methods.…”
Section: Introduction
confidence: 99%
“…2.1 Non-streaming lidar perception Most lidar perception architectures take inspiration from the image perception literature [23,17,16]. Some single-stage methods typically convert the point cloud into a bird's-eye view image [38,15,31] or a range view image [20,10] and perform detection in those views. The most common paradigm is to convert the lidar point cloud into a BEV image as it offers several advantages like a lack of scale ambiguity, a near lack of occlusion, the ease of fusing HD maps [30] and performing simultaneous detection and trajectory predictions [4,18].…”
Section: Related Work
confidence: 99%
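The bird's-eye-view conversion described in the passage above can be sketched as a simple rasterization: scatter points into a 2D grid over the ground plane and keep a per-cell statistic such as maximum height. The ranges and resolution below are illustrative, not taken from any cited method:

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 51.2), y_range=(-25.6, 25.6), res=0.2):
    """Rasterize an (N, 3) point cloud into a BEV height map:
    each cell stores the maximum z of the points falling in it,
    and empty cells are set to 0."""
    W = int((x_range[1] - x_range[0]) / res)
    H = int((y_range[1] - y_range[0]) / res)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    cols = ((x - x_range[0]) / res).astype(int)
    rows = ((y - y_range[0]) / res).astype(int)
    bev = np.full((H, W), -np.inf, dtype=np.float32)
    np.maximum.at(bev, (rows, cols), z)   # unbuffered per-cell max
    bev[np.isinf(bev)] = 0.0              # empty cells -> 0
    return bev
```

This single-channel map makes the quoted advantages concrete: objects keep a fixed metric scale regardless of distance, and rarely occlude one another from above; production pipelines stack several such channels (density, intensity, height slices) rather than one.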
“…While such self-driving cars typically deploy a wide variety of sensors lidars play a key role due to the accurate range information provided. Driven in part by the availability of benchmark datasets [12,3,26], the last decade has seen tremendous progress in lidar based 3D object detection [38,15,31,20,10]. However, these methods all ignore the fact that most lidar sensors scan the scene sequentially as the lidar rotates around the z-axis.…”
Section: Introduction
confidence: 99%