2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8814287

Driving Behavior Modeling Based on Hidden Markov Models with Driver's Eye-Gaze Measurement and Ego-Vehicle Localization

Cited by 17 publications (8 citation statements); references 16 publications.

Citation statements (ordered by relevance):
“…To effectively predict how a driver will react to an RtI, the ADS must understand the driver's ability to undertake it (Koesdwiady et al 2016, Yi et al 2019a). This is often accomplished via vehicle-oriented (e.g., acceleration or driving path) or driver-oriented (e.g., eye closure or hand position) approaches (Hecht et al 2018, Akai et al 2019). Given the substantial effect of driver behavior on roadway safety (Brookhuis and De Waard 2010, Wang et al 2020), predictive models have been a focus of recent DSM research (Torres, Ohashi, and Pessin 2019; Yi et al 2019b), with a few notable examples adopting a Bayesian perspective (Agamennoni, Nieto, and Nebot 2011; Straub, Zheng, and Fisher 2014).…”
Section: Driver-State Monitoring
confidence: 99%
“…Since an HMM is sometimes inefficient because it cannot integrate past and input data, the autoregressive input-output HMM (AIOHMM) was proposed to overcome this limitation and obtain a better driving-behaviour model based on the driver's visual direction, gas pedal, and steering wheel. The AIOHMM driver model achieved the best precision with five hidden states across the different tasks (turn left, turn right, go straight, follow participant) [55]. However, an HMM is not enough to provide a better driving-prediction model for all drivers due to individual driving behaviour.…”
Section: Hidden Markov Model (HMM) for Driving Behaviour
confidence: 99%
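For orientation, the sketch below shows the plain HMM-per-maneuver baseline that the AIOHMM statement above improves upon: one Gaussian HMM with five hidden states is fitted per maneuver (turn left, turn right, go straight, follow), and a new observation sequence is assigned to the maneuver whose model scores it highest. The use of hmmlearn, the per-frame feature layout (e.g., gaze direction, gas pedal, steering angle), and the helper names `train_models`/`classify` are illustrative assumptions, not the AIOHMM formulation of [55].

```python
# Minimal sketch (assumed setup, not the model of [55]): one Gaussian HMM per
# maneuver, classification by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

MANEUVERS = ["turn_left", "turn_right", "go_straight", "follow"]

def train_models(sequences_by_maneuver, n_states=5):
    """Fit one Gaussian HMM per maneuver from lists of (T_i, n_features) arrays."""
    models = {}
    for maneuver, seqs in sequences_by_maneuver.items():
        X = np.vstack(seqs)                  # concatenate sequences row-wise
        lengths = [len(s) for s in seqs]     # tell hmmlearn where each sequence ends
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[maneuver] = m
    return models

def classify(models, observation_seq):
    """Return the maneuver whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda m: models[m].score(observation_seq))
```

Unlike this baseline, the AIOHMM described in [55] additionally conditions its transitions and emissions on the input stream and on past outputs, which is what lets it integrate the history that a plain HMM discards.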
“…For this task, the approximate direction of drivers' gaze is often used unless eye-tracking data is available as in [157]. The processing pipeline for obtaining gaze features usually includes face detection and tracking, followed by facial landmark detection, extraction of gaze zones [79], [84], [158], gaze duration, frequency, and blinks [79], [84].…”
Section: Driver Maneuver Recognition and Prediction
confidence: 99%
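To make the feature step above concrete, here is a small sketch of how per-frame gaze-zone labels (the output of the face-detection and landmark stages, which are not shown) could be aggregated into gaze-duration, glance-frequency, and blink features. The zone names, the 30 fps frame rate, and the treatment of "blink" as a label are assumptions for illustration rather than the exact features of [79], [84].

```python
# Aggregate a per-frame gaze-zone label sequence into simple summary features.
# Zone names, frame rate, and the "blink" label are illustrative assumptions.
from collections import Counter

def gaze_features(zone_labels, fps=30.0):
    counts = Counter(zone_labels)
    total = len(zone_labels)
    seconds = total / fps
    features = {}
    for zone, n in counts.items():
        features[f"duration_s_{zone}"] = n / fps   # time spent looking at the zone
        features[f"ratio_{zone}"] = n / total      # fraction of frames in the zone
    # Glance frequency: transitions *into* a zone per second of recording.
    entries = Counter(b for a, b in zip(zone_labels, zone_labels[1:]) if a != b)
    for zone, n in entries.items():
        features[f"glances_per_s_{zone}"] = n / seconds
    features["blinks_per_s"] = entries.get("blink", 0) / seconds
    return features

# Example with hypothetical labels: mostly road, one glance at the left mirror.
print(gaze_features(["road"] * 50 + ["mirror_left"] * 10 + ["road"] * 30))
```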
“…Besides discriminative models, temporal modeling, which fits the data more naturally, has also been applied. Jain et al [159] and Akai et al [157] propose auto-regressive input-output Hidden Markov Models (HMMs) to classify a driver's actions given driver gaze and vehicle dynamics. Recurrent networks are also effective for multi-modal data [160] but lack the explainability of HMMs.…”
Section: Driver Maneuver Recognition and Prediction
confidence: 99%
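To show at the computation level what distinguishes an input-output HMM from a recurrent network here, below is a hedged sketch of a scaled forward pass in which the transition matrix at each step is a softmax of weights applied to the current input (e.g., encoded gaze), while Gaussian emissions model a vehicle-dynamics signal. The softmax parameterization, the scalar emission, and the function names are generic illustrative choices, not the exact AIOHMM of [157], [159].

```python
# Sketch of an input-output HMM forward pass with input-dependent transitions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def iohmm_loglik(inputs, obs, W, means, variances, pi):
    """inputs: (T, d_u) input features; obs: (T,) scalar observations;
    W: (K, K, d_u) transition weights (score of i->j at t is W[i, j] @ inputs[t]);
    means, variances, pi: (K,) emission parameters and initial distribution."""
    emit = lambda x: np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(
        2.0 * np.pi * variances
    )
    alpha = pi * emit(obs[0])
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale                    # rescale to avoid numerical underflow
    for t in range(1, len(obs)):
        A = softmax(W @ inputs[t], axis=1)   # (K, K) transitions conditioned on input
        alpha = (alpha @ A) * emit(obs[t])   # propagate belief, weight by emission
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik
```

A classifier can compare such log-likelihoods across per-maneuver models, and the rescaled belief over hidden states stays inspectable at every step, which is the explainability advantage the statement above attributes to HMMs.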