2020
DOI: 10.1049/iet-its.2020.0087

Predicting driver behaviour at intersections based on driver gaze and traffic light recognition

Abstract: This work introduces and evaluates a model for predicting driver behaviour, namely turns or proceeding straight, at traffic light intersections from driver three-dimensional gaze data and traffic light recognition. Based on vehicular data, this work relates the traffic light position, the driver's gaze, head movement, and distance from the centre of the traffic light to build a model of driver behaviour. The model can be used to predict the expected driver manoeuvre 3 to 4 s prior to arrival at the intersectio…
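The abstract's core signal can be illustrated with a minimal sketch: the angle between the driver's 3-D gaze ray and the vector from the head to the detected traffic light, together with the distance to the light, forms a small feature vector. All coordinates, names, and values below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def gaze_to_light_angle(head_pos, gaze_dir, light_pos):
    """Angle (rad) between the gaze ray and the head-to-traffic-light vector."""
    to_light = light_pos - head_pos
    to_light = to_light / np.linalg.norm(to_light)
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    return np.arccos(np.clip(np.dot(gaze, to_light), -1.0, 1.0))

# One hypothetical frame, roughly 3-4 s before the intersection
# (vehicle frame, metres) -- placeholder values only:
head = np.array([0.0, 1.2, 0.0])    # driver head position
gaze = np.array([0.3, 0.1, 1.0])    # 3-D gaze direction from an eye tracker
light = np.array([2.0, 5.0, 40.0])  # detected traffic light centre

angle = gaze_to_light_angle(head, gaze, light)
dist = np.linalg.norm(light - head)
features = [angle, dist]            # stacked over frames, then fed to a classifier
print(f"gaze-to-light angle: {np.degrees(angle):.1f} deg, distance: {dist:.1f} m")
```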

Cited by 9 publications (1 citation statement)
References 25 publications
“…Until recently, the majority of driving observation frameworks comprised a manual feature extraction step followed by a classification module (for a thorough overview see [21]). The constructed feature vectors are often derived from hand- and body-pose [2], [3], [6], [7], [38], [39], facial expressions and eye-based input [40], [41], and head pose [42], [43], but also foot dynamics [44], detected objects [6], and physiological signals [45] have been considered. Classification approaches are fairly similar to the ones used in standard video classification, with LSTMs [3], [4], SVMs [2], [46], random forests [47] or HMMs [4], and graph neural networks [7], [48] being popular choices.…”
Section: Related Work, A. Driver Action Recognition
confidence: 99%
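For context, here is a hedged sketch of the classical pipeline this excerpt describes: hand-crafted features followed by an off-the-shelf classifier. The feature layout, labels, and data below are synthetic, and scikit-learn's SVC merely stands in for the SVMs the excerpt cites in [2], [46].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 200 time windows x 6 hand-crafted features (e.g. mean/std of head yaw,
# head pitch, and gaze-to-light angle) -- purely synthetic placeholders.
X = rng.normal(size=(200, 6))
y = rng.integers(0, 3, size=200)  # 0 = straight, 1 = left turn, 2 = right turn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy (random data, so near chance):", clf.score(X_test, y_test))
```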