2023
DOI: 10.1109/tpami.2022.3150906

ROAD: The Road Event Awareness Dataset for Autonomous Driving

Abstract: Humans drive in a holistic fashion which entails, in particular, understanding dynamic road events and their evolution. Injecting these capabilities in autonomous vehicles can thus take situational awareness and decision making closer to human-level performance. To this purpose, we introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to our knowledge the first of its kind. ROAD is designed to test an autonomous vehicle's ability to detect road events, defined as triplets composed by an act…

Cited by 40 publications (23 citation statements)
References 87 publications (164 reference statements)
“…Among the datasets dedicated to autonomous vehicles, the vast majority are for autonomous driving in road environments and focus on object detection and scene segmentation. We can divide them into two main families [3, 24, 25]: (1) datasets without ground truth data (images as a single modality), and (2) datasets with ground truth data (as a multi-modality). Some of them are mono-modal (e.g., based on a camera as a single modality), and others are multi-modal (e.g., based on multi-sensor fusion of camera and LiDAR).…”
Section: Related Work
confidence: 99%
“…The Road event Awareness Dataset for Autonomous Driving (ROAD) [25] includes 22 videos (each 8 min long) with 122 K annotated frames and 560 K bounding boxes carrying 1.7 M individual labels. ROAD is a dataset that was designed to test the situational awareness capabilities of a robot car.…”
Section: Related Work
confidence: 99%
“…Domain-specific, large-scale, and diverse datasets can fuel further advances in supervised learning. In the fast-growing field of autonomous driving, the datasets BDD-100K [26], NuScenes [27], KAIST multi-spectral driving dataset [28], KITTI [29], ROAD [54], and A2D2 [55] have proven to be of great value for computer vision tasks like object classification, object detection, and scene segmentation.…”
Section: Driving Datasets
confidence: 99%
“…Perception systems for AV/IV can be understood as a process that interprets the data provided by the sensors in order to understand the surrounding environment, thus contributing to safer decision-making. An important component of perception systems is object classification, which is currently dominated by deep network (DN) architectures [4]-[7].…”
Section: Introduction
confidence: 99%