LaserFlow: Efficient and Probabilistic Object Detection and Motion Forecasting
2021
DOI: 10.1109/lra.2020.3047793

Cited by 28 publications (11 citation statements)
References 24 publications
“…Sensor Fusion Methods for Object Detection and Motion Forecasting: The majority of the sensor fusion works consider perception tasks, e.g. object detection [22,12,66,7,44,31,34,61,33,37] and motion forecasting [36,5,35,63,6,19,38]. They operate on multi-view LiDAR, e.g.…”
Section: Related Work
confidence: 99%
“…Prior works in the field of sensor fusion have mostly focused on the perception aspect of driving, e.g. 2D and 3D object detection [22,12,66,9,44,31,34,61,33,37], motion forecasting [22,36,5,35,63,6,19,38,32,9], and depth estimation [24,60,61,33]. These methods focus on learning a state representation that captures the geometric and semantic information of the 3D scene.…”
Section: Introduction
confidence: 99%
“…This prediction is achieved using information from their current motion such as velocity, acceleration or previous trajectory, and some contextual elements about the scene. This context can take various forms, ranging from raw LiDAR point clouds [7], [8], [9], [10], [25], [26], [27] and RGB camera stream [25], [28], [29], [30] to more semantic representations including High-Definition maps [2], [5], [6], [7], [8], [11], [31], or detections of other agents and their motion information [5], [6], [12], [14]. Recent trajectory prediction models are designed to produce multiple forecasts, attempting to capture the multiplicity of possible futures [12], [32], [33].…”
Section: Related Work
confidence: 99%
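The multi-forecast idea in the statement above can be made concrete with a short sketch. What follows is a minimal PyTorch illustration, not code from LaserFlow or from any of the citing papers: a prediction head that regresses K candidate future trajectories from an agent embedding and scores a probability per mode. K, HORIZON, the embedding size, and all layer choices are assumptions made for the example.

import torch
import torch.nn as nn

K = 6          # number of trajectory hypotheses (assumed)
HORIZON = 30   # future timesteps, e.g. 3 s at 10 Hz (assumed)

class MultiModalHead(nn.Module):
    # Regresses K trajectories at once plus a softmax score per mode,
    # so the model can represent several plausible futures.
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.traj = nn.Linear(embed_dim, K * HORIZON * 2)  # (x, y) per step
        self.score = nn.Linear(embed_dim, K)

    def forward(self, agent_embed: torch.Tensor):
        b = agent_embed.shape[0]
        trajs = self.traj(agent_embed).view(b, K, HORIZON, 2)
        probs = self.score(agent_embed).softmax(dim=-1)
        return trajs, probs

head = MultiModalHead()
trajs, probs = head(torch.randn(4, 128))  # 4 agent embeddings
print(trajs.shape, probs.shape)           # (4, 6, 30, 2) and (4, 6)

Heads of this shape are commonly trained with a winner-takes-all loss that penalizes only the hypothesis closest to the ground-truth trajectory, which discourages the modes from collapsing onto a single future.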
“…Over the last years, the paradigm has shifted towards learning-based models [5], [6]. These models generally operate over two sources of information: (1) scene information about the agent's surroundings, e.g., LiDAR point clouds [7], [8], [9], [10] or bird-eye-view rasters [5], [6], [7], [8], [11], and (2) motion cues of the agent, e.g., its instantaneous velocity, acceleration, and yaw rate [5], [12] or its previous trajectory [6], [13], [14], [15]. But despite being trained with diverse modalities as input, we remark that, in practice, these models tend to base their predictions on only one modality: the previous dynamics of the agent.…”
Section: Introduction
confidence: 99%
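As a concrete illustration of the two information sources named in the statement above, here is a minimal PyTorch sketch, again not the architecture of any cited paper: one branch encodes a bird's-eye-view raster of the scene with a small CNN, the other encodes the agent's instantaneous motion state with an MLP, and the fused feature drives a trajectory regressor. The raster size, the state layout [velocity, acceleration, yaw rate], and all layer sizes are assumptions for the example.

import torch
import torch.nn as nn

class TwoBranchPredictor(nn.Module):
    def __init__(self, raster_channels: int = 3, state_dim: int = 3,
                 horizon: int = 30):
        super().__init__()
        # Scene branch: CNN over a BEV raster (e.g. a 128x128 rendered map).
        self.scene = nn.Sequential(
            nn.Conv2d(raster_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (B, 32)
        )
        # Motion branch: MLP over the state vector [v, a, yaw_rate].
        self.motion = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, 32),
        )
        # Fused head regresses `horizon` future (x, y) waypoints.
        self.head = nn.Linear(32 + 32, horizon * 2)
        self.horizon = horizon

    def forward(self, raster: torch.Tensor, state: torch.Tensor):
        z = torch.cat([self.scene(raster), self.motion(state)], dim=-1)
        return self.head(z).view(-1, self.horizon, 2)

model = TwoBranchPredictor()
traj = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3))
print(traj.shape)  # (2, 30, 2)

The statement's observation that trained models often lean on the motion branch alone can be probed by ablation, e.g. zeroing out the raster input and measuring how little the predicted trajectory changes.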