2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00957

Large Scale Interactive Motion Forecasting for Autonomous Driving: The Waymo Open Motion Dataset

Cited by 257 publications (158 citation statements)
References 24 publications
“…Nearly all work assumes an independent, per-agent output space, in which agent interactions cannot be explicitly captured. A few works are notable in describing joint interactions as output, either in an asymmetric [28,47] or symmetric way [18,35,41].…”
Section: Related Work
mentioning, confidence: 99%
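
The distinction this quote draws, between a marginal (independent per-agent) output space and a joint one, can be made concrete with a small sketch. The array shapes and sampling below are illustrative assumptions, not the representation used by any of the cited models:

```python
import numpy as np

A, K, T = 4, 6, 80  # agents, predicted modes, future timesteps (illustrative)

# Marginal output space: each agent carries its own K modes and its own
# mode distribution, so modes of different agents are not tied together.
marginal_trajs = np.zeros((A, K, T, 2))   # (x, y) waypoints per agent, per mode
marginal_probs = np.full((A, K), 1 / K)   # one distribution per agent

# Joint output space: each of the K modes is one consistent future for ALL
# agents at once, so interactions (e.g., who yields) are captured explicitly.
joint_trajs = np.zeros((K, A, T, 2))
joint_probs = np.full(K, 1 / K)           # one distribution over whole scenes

# Sampling a scene-level future shows the difference: independent draws can
# combine incompatible per-agent modes (e.g., two cars both choosing "go"),
# while a joint draw returns a mutually consistent scene.
marginal_sample = np.stack(
    [marginal_trajs[a, np.random.choice(K, p=marginal_probs[a])] for a in range(A)]
)
joint_sample = joint_trajs[np.random.choice(K, p=joint_probs)]
```
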
“…There has been a rich body of work on how to model agents' futures, their interactions, and the environment. However, there is little consensus to date on the best modeling choices for each component, and in popular benchmark challenge datasets [6,12,18,53], there is a surprisingly diverse set of solutions to this problem; for details see Section 2 and Table 2.…”
Section: Introduction
mentioning, confidence: 99%
“…As for sensor modalities, nuScenes [3] collected data with Radar, RGB camera, and LiDAR in a 360° viewpoint; WoodScape [43] captured data with fisheye cameras; and A2D2 [12] provided extensive vehicle bus data including the steering wheel angle, throttle, and braking. Regarding data annotations, semantic labels in both images [7,15,30,36] and point cloud [2,14] were provided to enable semantic segmentation; 2D/3D box trajectories were offered [4,9] to facilitate tracking and prediction. In summary, existing datasets generally emphasized the data comprehensiveness in single-vehicle situations, but ignored the multi-vehicle collaborative self-driving scenarios.…”
Section: Related Work
mentioning, confidence: 99%
“…Vehicle trajectory prediction is one of the main building blocks of a self-driving car, which forecasts how the future might unroll based on the scene, i.e., the road structure, and the traffic participants. State-of-the-art models are commonly trained and evaluated on datasets collected from a few cities [14,19,23]. While their evaluation has shown impressive performance, i.e., almost no off-road prediction, their generalization to other types of possible scenes, e.g., other cities, remains unknown.…”
Section: Introduction
mentioning, confidence: 99%
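
The "almost no off-road prediction" observation in this quote corresponds to an off-road rate metric. A minimal sketch of how such a rate can be computed, assuming a rasterized drivable-area mask; the function name, raster representation, and parameters are assumptions for illustration, not the cited benchmarks' API:

```python
import numpy as np

def offroad_rate(trajs, drivable_mask, origin, resolution):
    """Fraction of predicted waypoints that land outside the drivable area.

    trajs:         (N, T, 2) predicted (x, y) positions in world meters.
    drivable_mask: (H, W) boolean raster, True where driving is allowed.
    origin:        (x0, y0) world coordinates of mask pixel (0, 0).
    resolution:    meters per pixel.
    """
    cols = ((trajs[..., 0] - origin[0]) / resolution).astype(int)
    rows = ((trajs[..., 1] - origin[1]) / resolution).astype(int)
    H, W = drivable_mask.shape
    inside = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
    on_road = np.zeros(trajs.shape[:2], dtype=bool)  # off-raster counts as off-road
    on_road[inside] = drivable_mask[rows[inside], cols[inside]]
    return 1.0 - on_road.mean()
```
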