2017
DOI: 10.48550/arxiv.1711.01458
Preprint

DDD17: End-To-End DAVIS Driving Dataset

Abstract: Event cameras, such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS), can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynam…
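The abstract's description of the DVS stream maps naturally onto a simple record type: each event carries a timestamp, pixel coordinates, and a polarity. The sketch below is illustrative only; the field names and the accumulation helper are assumptions for this page, not DDD17's actual file schema or API.

from dataclasses import dataclass

import numpy as np


@dataclass
class DvsEvent:
    """One DVS temporal-contrast event, as described in the abstract.

    Field names are illustrative assumptions, not DDD17's actual schema.
    """
    t: int         # timestamp, microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for brightness increase, -1 for decrease


def accumulate_events(events, height=260, width=346):
    """Sum event polarities per pixel over a time window.

    346x260 matches the DAVIS346 sensor family; adjust for other sensors.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame

Accumulating polarities into a 2D frame like this is one common way to visualize an event window alongside the concurrent APS frames.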

Cited by 19 publications (29 citation statements)
References 7 publications
“…DDD17 [60] contains 40 different driving sequences of event data captured by a DVS camera. While the dataset provides both grayscale images and event data, it does not provide semantic segmentation labels.…”
Section: Experiments, A. Experimental Setup, 1) Dataset | Citation type: mentioning
Confidence: 99%
“…According to the image-like transformation, Alonso et al [57] introduced a six-channel event representation and constructed a semantic segmentation model Ev-SegNet on an extended event dataset DDD17 [58], whose semantic labels are generated by a pre-trained model on Cityscapes and only contain 6 major categories. In contrast, our models are trained with the ground-truth labels of Cityscapes and perform semantic segmentation in all 19 object classes, so that the perception component can deliver a sufficiently dense and fine-grained scene understanding result for upper-level assistance systems.…”
Section: Event-based Vision | Citation type: mentioning
Confidence: 99%
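For context on the six-channel encoding mentioned above: Ev-SegNet-style representations typically stack, per polarity, the event count plus the mean and standard deviation of normalized event timestamps. The sketch below assumes that reading and invents the array layout; consult Alonso et al. [57] for the actual definition.

import numpy as np


def six_channel_encoding(t, x, y, p, height=260, width=346):
    """Encode an event window as a 6-channel image (assumed layout):
    channels 0-2: count, mean timestamp, std of timestamps for positive events;
    channels 3-5: the same statistics for negative events.

    t, x, y, p are 1D arrays: timestamps, pixel coords, polarity (+1/-1).
    """
    out = np.zeros((6, height, width), dtype=np.float32)
    if t.size == 0:
        return out
    # Normalize timestamps into [0, 1] over the window.
    t = (t - t.min()) / max(float(t.max() - t.min()), 1e-9)
    for base, mask in ((0, p > 0), (3, p <= 0)):
        xs, ys, ts = x[mask], y[mask], t[mask]
        np.add.at(out[base], (ys, xs), 1.0)          # per-pixel event count
        np.add.at(out[base + 1], (ys, xs), ts)       # sum of timestamps
        np.add.at(out[base + 2], (ys, xs), ts * ts)  # sum of squared timestamps
        cnt = np.maximum(out[base], 1.0)
        mean = out[base + 1] / cnt
        out[base + 1] = mean                         # mean timestamp per pixel
        # Var = E[t^2] - E[t]^2, clamped at 0 for numerical safety.
        out[base + 2] = np.sqrt(np.maximum(out[base + 2] / cnt - mean * mean, 0.0))
    return out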
“…In [5], 12 hours of driving sequences are obtained during day and night time. Various car information, such as vehicle speed, GPS position, and driver steering angle, is associated with the dataset.…”
Section: Event-based Datasets for Visual Odometry, Optical Flow and St... | Citation type: mentioning
Confidence: 99%
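Since the citation above highlights the vehicle signals bundled with DDD17, a common preprocessing step is aligning that telemetry with event or frame timestamps by interpolation. The function below is a generic sketch under assumed array inputs; it is not DDD17's actual file layout or tooling.

import numpy as np


def align_telemetry(query_ts, signal_ts, signal_values):
    """Linearly interpolate a vehicle signal (e.g. speed or steering angle)
    at each query timestamp.

    query_ts:      1D array of event or frame timestamps.
    signal_ts:     1D sorted array of telemetry timestamps.
    signal_values: 1D array of telemetry samples, same length as signal_ts.
    Names and units here are assumptions for illustration.
    """
    return np.interp(query_ts, signal_ts, signal_values)

# Example usage (hypothetical arrays): steering angle at each APS frame time:
# steering_at_frames = align_telemetry(frame_ts, steer_ts, steer_deg)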
“…As shown by Tab. 1, the GEN1 Automotive Detection Dataset is 3 times larger than the DDD17 [5] dataset in terms of hours and has about 22 times more labels than the [31] pedestrian dataset. In terms of number of labels, the [4] dataset is the second largest one, with approximately 2.5 times fewer labels than ours.…”
Section: Analysis and Statistics | Citation type: mentioning
Confidence: 99%