2020
DOI: 10.1609/aaai.v34i07.6933

PI-RCNN: An Efficient Multi-Sensor 3D Object Detector with Point-Based Attentive Cont-Conv Fusion Module

Abstract: LiDAR point clouds and RGB images are both essential for 3D object detection, and many state-of-the-art 3D detection algorithms are dedicated to fusing these two types of data effectively. However, their fusion methods based on Bird's Eye View (BEV) or voxel format are not accurate. In this paper, we propose a novel fusion approach named Point-based Attentive Cont-conv Fusion (PACF) module, which fuses multi-sensor features directly on 3D points. In addition to continuous convolution, we add a Poin…
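To make the fusion idea in the abstract concrete, the sketch below shows a continuous-convolution-style fusion step in PyTorch: for each LiDAR point, image features associated with its K nearest 3D neighbours are combined with their relative offsets through a small MLP and pooled. This is an illustrative approximation under assumed names and feature sizes (`ContConvFusion`, `img_dim`, `k`); it is not the PI-RCNN reference implementation.

```python
import torch
import torch.nn as nn

class ContConvFusion(nn.Module):
    """Illustrative continuous-convolution fusion: for every LiDAR point,
    aggregate image features sampled at its K nearest 3D neighbours.
    A sketch of the general idea, not the PI-RCNN implementation."""

    def __init__(self, img_dim=64, point_dim=64, k=8):
        super().__init__()
        self.k = k
        # MLP over (neighbour image feature, 3D offset) pairs
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + 3, point_dim),
            nn.ReLU(),
            nn.Linear(point_dim, point_dim),
        )

    def forward(self, points, img_feats):
        # points:    (N, 3)  LiDAR coordinates
        # img_feats: (N, img_dim) image features already associated with each
        #            point (e.g. gathered by projecting points onto the image)
        dists = torch.cdist(points, points)               # (N, N) pairwise distances
        knn = dists.topk(self.k, largest=False).indices   # (N, k) neighbour indices
                                                           # (includes the point itself)
        neigh_feats = img_feats[knn]                       # (N, k, img_dim)
        offsets = points[knn] - points.unsqueeze(1)        # (N, k, 3) relative positions

        x = torch.cat([neigh_feats, offsets], dim=-1)      # (N, k, img_dim + 3)
        x = self.mlp(x)                                    # (N, k, point_dim)
        return x.max(dim=1).values                         # (N, point_dim) per-point output
```

A call such as `ContConvFusion()(points, img_feats)`, with `points` of shape (N, 3) and `img_feats` of shape (N, 64), returns (N, 64) fused per-point features.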

Cited by 159 publications (94 citation statements)
References 21 publications
“…Additionally, the fusion methods based on BEV or voxel format are not accurate enough. Thus, PI-RCNN [74] proposes a novel fusion method named Point-based Attentive Cont-conv Fusion module to fuse multisensor features directly on 3D points. Except for continuous convolution, Point-Pooling and Attentive Aggregation are used to fuse features expressively.…”
Section: Multi-Sensor Fusion-Based 3D Object Detection Methods
Mentioning confidence: 99%
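The excerpt above also names Point-Pooling and Attentive Aggregation as the components that make the fused features more expressive. The snippet below is a hedged sketch of how an attentive combination of pooled point features and image-derived features could look; the module name and the per-source scoring layers are assumptions, not the paper's API.

```python
import torch
import torch.nn as nn

class AttentiveAggregation(nn.Module):
    """Sketch of an attentive fusion of pooled point features and
    image-derived features; a scalar weight per feature source is learned
    for each point and normalised with a softmax. Not the PI-RCNN
    reference implementation."""

    def __init__(self, dim=64):
        super().__init__()
        self.score_point = nn.Linear(dim, 1)  # attention logit for the LiDAR branch
        self.score_image = nn.Linear(dim, 1)  # attention logit for the image branch

    def forward(self, point_feats, image_feats):
        # point_feats, image_feats: (N, dim) per-point features, e.g. a
        # max-pooled LiDAR neighbourhood feature and a continuous-conv output
        logits = torch.cat(
            [self.score_point(point_feats), self.score_image(image_feats)], dim=-1
        )                                     # (N, 2)
        w = torch.softmax(logits, dim=-1)     # (N, 2) attention weights
        fused = w[:, :1] * point_feats + w[:, 1:] * image_feats
        return fused                          # (N, dim) attentively fused features
```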
“…Instead, some methods [10], [13], [36] perform the point-wise detection on the point-wise features provided by the PointNet++ [16] or graph neural networks [37]. The point-wise point cloud features can also be augmented with the camera image features by projecting the points onto the image plane [38], [39]. To improve the orientation coverage of the cubic anchor, a novel spherical anchor for point cloud space is proposed in STD [11], but the box recall is still lagging behind that of the mapview RPN in the voxel-based methods [14], [15].…”
Section: 3D Object Detection Based on Points
Mentioning confidence: 99%
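The passage above mentions augmenting point-wise features with camera image features by projecting the points onto the image plane. The sketch below shows a generic pinhole projection and nearest-pixel feature lookup, assuming points already expressed in the camera frame and a standard 3x3 intrinsic matrix; it is not tied to any particular dataset's calibration format.

```python
import numpy as np

def project_points_to_image(points_cam, K):
    """Project 3D points given in the camera frame onto the image plane.

    points_cam: (N, 3) array of (x, y, z) in camera coordinates, z > 0
    K:          (3, 3) camera intrinsic matrix
    returns:    (N, 2) pixel coordinates (u, v)
    """
    uvw = points_cam @ K.T            # (N, 3) homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (u, v)

def gather_image_features(points_cam, K, feature_map):
    """Nearest-pixel lookup of image features at the projected locations.

    feature_map: (H, W, C) dense image feature map (e.g. from a 2D network)
    returns:     (N, C) per-point image features; points projecting outside
                 the image receive zeros
    """
    H, W, C = feature_map.shape
    uv = np.round(project_points_to_image(points_cam, K)).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    feats = np.zeros((points_cam.shape[0], C), dtype=feature_map.dtype)
    feats[valid] = feature_map[uv[valid, 1], uv[valid, 0]]
    return feats
```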
“…The LiDAR points in a frustum proposal are used to generate the instance segmentation and 3D bounding boxes by PointNet++ [14]. PI-RCNN [22] uses a point-based attentive contfuse module to fuse features from multiple sensors. Pseudo-LiDAR++ [23] uses stereo camera images with LiDAR points to generate dense pseudo point clouds and enhance the performance of 3D detectors.…”
Section: Related Work
Mentioning confidence: 99%
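Pseudo-LiDAR++ is cited above for turning stereo depth into dense pseudo point clouds; the core operation is back-projecting a depth map through the camera intrinsics, i.e. the inverse of the pinhole projection. A minimal sketch under the same intrinsic-matrix assumption (not that paper's actual code):

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K):
    """Back-project a dense depth map into a pseudo point cloud.

    depth:   (H, W) depth in metres for every pixel
    K:       (3, 3) camera intrinsic matrix
    returns: (H*W, 3) points (x, y, z) in the camera frame
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx   # inverse of u = fx * x / z + cx
    y = (v - cy) * z / fy   # inverse of v = fy * y / z + cy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```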
“…Our method is at the same level of performance as the two-stage detector PI-RCNN [22] and the one-stage anchor-based detector SCNet [39]. We visualize the detection results in Figure 4.…”
Section: Experiments on the KITTI Validation Set
Mentioning confidence: 99%