2020
DOI: 10.48550/arxiv.2010.00731
Preprint

LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion

Abstract: In this paper, we present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps. Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous radial velocity measurements. However, there are factors that make the fusion of lidar and radar information challenging, such as the relatively low angular resolution of radar measurements, their sparsit…
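
The abstract above describes fusing sparse radar returns, including their instantaneous radial velocities, with lidar features in a bird's-eye-view (BEV) representation. As a rough illustration of what such a radar input could look like before fusion, the sketch below rasterizes radar points into a BEV grid. The grid size, cell resolution, channel layout, and function name are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch: rasterize sparse radar returns (x, y, radial velocity)
# into a BEV feature grid that could be concatenated with lidar BEV features.
# All parameters and the row/column convention are illustrative assumptions.
import numpy as np

def radar_to_bev(points_xy, radial_vel, grid_size=200, cell_m=0.5):
    """Scatter radar points into a BEV grid.

    points_xy  : (N, 2) array of x, y positions in metres (ego-centred).
    radial_vel : (N,) array of instantaneous radial velocities in m/s.
    Returns a (grid_size, grid_size, 3) array with channels:
      0 - occupancy count, 1 - mean radial velocity, 2 - max |radial velocity|.
    """
    half = grid_size * cell_m / 2.0
    bev = np.zeros((grid_size, grid_size, 3), dtype=np.float32)

    # Convert metric coordinates to grid indices and keep in-range points.
    ij = np.floor((points_xy + half) / cell_m).astype(int)
    keep = np.all((ij >= 0) & (ij < grid_size), axis=1)
    ij, v = ij[keep], radial_vel[keep]

    for (i, j), vel in zip(ij, v):
        bev[i, j, 0] += 1.0                         # occupancy count
        bev[i, j, 1] += vel                         # accumulate for the mean
        bev[i, j, 2] = max(bev[i, j, 2], abs(vel))  # strongest return

    occupied = bev[..., 0] > 0
    bev[occupied, 1] /= bev[occupied, 0]            # finish the mean velocity
    return bev

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-40, 40, size=(64, 2))        # fake sparse radar returns
    vel = rng.normal(0.0, 5.0, size=64)
    grid = radar_to_bev(pts, vel)
    print(grid.shape, int(grid[..., 0].sum()))      # (200, 200, 3) 64
```

Because radar returns are sparse and coarse in angle, most cells stay empty; in a fusion setting the resulting channels would simply be stacked alongside the denser lidar BEV features.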

Cited by 9 publications (15 citation statements)
References 33 publications

“…This demonstrates that our proposed method can leverage multiple views much more effectively than previous multi-view end-to-end methods. Finally, we show that our method, with only LiDAR information, is able to outperform multi-sensor methods like LiRANet [5] (which uses RADAR in addition to LiDAR) and LC-MV [10] (which uses camera images in addition to LiDAR).…”
Section: Comparison to the State of the Art (mentioning)
confidence: 86%
“…These methods show that recent work on multi-modal predictions and the use of interaction graphs to model complex relationships can be easily extended to the framework of joint object detection and motion forecasting. [5] and [10] are recent multi-sensor methods that build on top of [4] by using radar and camera inputs respectively. These methods, by virtue of operating in BEV, lose out on high-resolution point information and are often limited by range of operation.…”
Section: Motion Forecasting (mentioning)
confidence: 99%
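
The statement above notes that BEV-based methods are often limited in range. A minimal back-of-the-envelope sketch, under assumed values, shows the usual reason: at a fixed cell resolution, a dense BEV feature map grows quadratically with sensing range. The cell size, channel count, and range values below are illustrative and not taken from any of the cited papers.

```python
# Illustrative scaling of a dense BEV feature map with sensing range.
# Cell size, channel count, and ranges are assumed values for illustration.
cell_m = 0.25          # assumed BEV cell resolution in metres
channels = 64          # assumed feature channels per cell
bytes_per_float = 4

for range_m in (50, 100, 200):
    cells_per_side = int(2 * range_m / cell_m)
    feature_bytes = cells_per_side ** 2 * channels * bytes_per_float
    print(f"range +/-{range_m} m -> {cells_per_side}x{cells_per_side} grid, "
          f"{feature_bytes / 1e9:.2f} GB per dense feature map")
```

Doubling the range quadruples the grid, which is why dense BEV pipelines trade off range against resolution, whereas point-based representations avoid paying for empty space.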