2017 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2017.7995833

Learning where to attend like a human driver

Abstract: Despite the advent of autonomous cars, it is likely, at least in the near future, that human attention will retain a central role as a guarantee of legal responsibility during the driving task. In this paper we study the dynamics of the driver's gaze and use it as a proxy to understand related attentional mechanisms. First, we build our analysis upon two questions: where and at what is the driver looking? Second, we model the driver's gaze by training a coarse-to-fine convolutional network on shor…


Cited by 29 publications (31 citation statements)
References 28 publications
“…This may happen with the driver in full control and getting active assistance from the robot, or the robot is in partial or full control and human drivers are passive observers "ready" to take over as deemed necessary by the machine or human [3], [4]. In the full spectrum from manual to autonomous mode, modeling the dynamics of driver's gaze is of particular interest because, if and how the driver is monitoring the driving environment is vital for driver assistance in manual mode [5], for takeover requests in highly automated mode [6] and for semantic perception of the surround in fully autonomous mode [7], [8].…”
Section: Introduction
confidence: 99%
“…Human driver's attention provides important visual cues for driving, and thus efforts to mimic human driver's attention have increasingly been introduced. Recently, several deep neural models have been utilized to predict where human drivers should pay attention [21,25]. Most of existing models were trained and tested on the DR(eye)VE dataset [1].…”
Section: Driver Attention Datasets
confidence: 99%
“…To our knowledge, [21] and [25] are the two deep neural models that use dash camera videos alone to predict human driver's gaze. They demonstrated similar results and were shown to surpass other deep learning models or traditional models that predict human gaze in non-driving-specific contexts.…”
Section: Training and Evaluation Details
confidence: 99%