2020
DOI: 10.48550/arxiv.2008.11901
Preprint

Multi-View Fusion of Sensor Data for Improved Perception and Prediction in Autonomous Driving

Abstract: We present an end-to-end method for object detection and trajectory prediction utilizing multi-view representations of LiDAR returns. Our method builds on a state-of-the-art Bird's-Eye View (BEV) network that fuses voxelized features from a sequence of historical LiDAR data as well as a rasterized high-definition map to perform detection and prediction tasks. We extend the BEV network with additional LiDAR Range-View (RV) features that use the raw LiDAR information in its native, non-quantized representation. …
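
The abstract describes fusing voxelized BEV features built from a history of LiDAR sweeps with range-view (RV) features computed on the raw, non-quantized LiDAR returns. The snippet below is a minimal, hypothetical sketch of one way such a fusion could be wired up: RV pixel features are scattered into their corresponding BEV cells and concatenated with the BEV features before a small convolutional fusion layer. All module names, tensor shapes, and the scatter-based projection are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical multi-view fusion sketch (not the paper's implementation):
# range-view features are projected into the BEV grid via per-pixel cell
# indices, then concatenated with voxelized BEV features and fused.
import torch
import torch.nn as nn

class MultiViewFusion(nn.Module):
    def __init__(self, bev_channels=64, rv_channels=32):
        super().__init__()
        # conv mixes the concatenated BEV + projected-RV feature maps
        self.fuse = nn.Conv2d(bev_channels + rv_channels, bev_channels,
                              kernel_size=3, padding=1)

    def forward(self, bev_feats, rv_feats, rv_to_bev_idx):
        # bev_feats:     (B, C_bev, H, W)      voxelized BEV features (sweep history)
        # rv_feats:      (B, C_rv, H_rv, W_rv) single-frame range-view features
        # rv_to_bev_idx: (B, H_rv * W_rv)      flat BEV cell index for each RV pixel
        B, C_rv, H_rv, W_rv = rv_feats.shape
        _, _, H, W = bev_feats.shape

        # scatter RV features onto an empty BEV canvas (last write wins per cell)
        rv_flat = rv_feats.view(B, C_rv, -1)
        canvas = rv_feats.new_zeros(B, C_rv, H * W)
        idx = rv_to_bev_idx.unsqueeze(1).expand(-1, C_rv, -1)
        canvas.scatter_(2, idx, rv_flat)
        rv_bev = canvas.view(B, C_rv, H, W)

        # concatenate the two views and fuse
        return self.fuse(torch.cat([bev_feats, rv_bev], dim=1))

if __name__ == "__main__":
    fusion = MultiViewFusion()
    bev = torch.randn(1, 64, 128, 128)
    rv = torch.randn(1, 32, 32, 256)
    idx = torch.randint(0, 128 * 128, (1, 32 * 256))
    print(fusion(bev, rv, idx).shape)  # torch.Size([1, 64, 128, 128])
```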

Cited by 7 publications (38 citation statements)
References 30 publications
“…Notably, our method shows a ∼15% improvement on pedestrian detection, a ∼40% improvement on bike detection, and a ∼30% improvement on motion forecasting of bikes, as compared to the best BEV-only MultiXNet. Next, we compare our method to another recent multi-view method, L-MV [10]. As shown in Table 1, our method outperforms L-MV [10] on all classes by a large margin on both detection and forecasting.…”
Section: Comparison to the State of the Art
confidence: 99%
“…Recently, [10] proposed a multi-view approach for the joint task. In this method, the authors proposed fusing a single-frame RV projection with multiple frames of BEV projection, which improves object detection performance.…”
Section: LiDAR Representation
confidence: 99%