2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2016
DOI: 10.1109/cvprw.2016.14
DR(eye)VE: A Dataset for Attention-Based Tasks with Applications to Autonomous and Assisted Driving

Abstract: Autonomous and assisted driving are undoubtedly hot topics in computer vision. However, the driving task is extremely complex and a deep understanding of drivers' behavior is still lacking. Several researchers are now investigating the attention mechanism in order to define computational models for detecting salient and interesting objects in the scene. Nevertheless, most of these models only refer to bottom-up visual saliency and are focused on still images. Instead, during the driving experience the temporal…

Cited by 96 publications (78 citation statements). References 34 publications.
“…The fourth dataset has data from DR(eye)VE [25], which consists of images from real life driving. The data is comprised of 74 videos with sequences of five minutes each with annotation of drivers' gaze fixations.…”
Section: Evaluation Methods
confidence: 99%
“…Several datasets [20], [24], [7], [18], [1] can be used for driver's attention prediction, but most of them are either restricted to limited settings or not publicly available. To the best of our knowledge, Dr(eye)ve [1] is the only public on-road driving dataset for the driver's attention prediction task. It consists of 555,000 frames divided into 74 video sequences.…”
Section: A Driver's Attention Prediction
confidence: 99%
“…Upperbound. We estimate importance scores for all the object proposals (tracklinks), so the final results depend on… [footnote 1: We use 3-fold cross validation instead of 10-fold due to not enough data; footnote 2: Since we do not have the object-category annotations.]…”
Section: Baselines
confidence: 99%
“…Drivers' gaze behavior has been studied as a proxy for their attention. Recently, a large driver attention dataset of routine driving [1] has been introduced and neural networks [21,25] have been trained end-to-end to estimate driver attention, mostly in lane-following and car-following situations. Nonetheless, datasets and prediction models for driver attention in rare and critical situations are still needed.…”
Section: Introduction
confidence: 99%