2019
DOI: 10.3390/s19061287

Continuous Driver’s Gaze Zone Estimation Using RGB-D Camera

Abstract: The driver gaze zone is an indicator of a driver’s attention and plays an important role in driver activity monitoring. Due to poor initialization of the point-cloud transformation, gaze zone systems using RGB-D cameras and the ICP (Iterative Closest Point) algorithm do not work well under long-time head motion. In this work, a solution for a continuous driver gaze zone estimation system in real-world driving situations is proposed, combining multi-zone ICP-based head pose tracking and appearance-based gaze estimation…
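As a rough illustration of the ICP-based head pose tracking idea mentioned in the abstract (not the paper's actual multi-zone algorithm or its initialization scheme), the sketch below aligns consecutive head point clouds with Open3D's point-to-point ICP. The function name, parameter values, and use of Open3D are assumptions made purely for illustration.

```python
# Minimal sketch: frame-to-frame ICP head pose tracking on depth point clouds.
# Assumption: Open3D is used only for illustration; the paper's multi-zone ICP
# and its initialization strategy are NOT reproduced here.
import numpy as np
import open3d as o3d

def track_head_pose(prev_cloud, curr_cloud, init_transform=None,
                    max_corr_dist=0.01):
    """Return the 4x4 rigid transform aligning prev_cloud onto curr_cloud."""
    if init_transform is None:
        # A poor initial guess here is exactly what leads to drift over time.
        init_transform = np.eye(4)
    result = o3d.pipelines.registration.registration_icp(
        prev_cloud, curr_cloud, max_corr_dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

# Chaining the per-frame transforms gives the head pose relative to the first
# frame; without good initialization, errors accumulate under long head motion,
# which is the failure mode the paper addresses.
```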

Cited by 30 publications (18 citation statements). References: 36 publications.
“…Jha and Busso [2] use only head pose in their method. Wang et al. [16] combine depth images of the head with RGB images of the eyes, but gaze estimation from eye images is unstable and only works with frontal images under ideal conditions. In [5], Yoon et al. collect a dataset comprising images captured in daytime and nighttime, with and without eyeglasses, using two NIR cameras and NIR lights.…”
Section: A. Gaze Estimation (mentioning)
confidence: 99%
“…Fields related to gaze zone estimation, such as head pose estimation [24] and gaze tracking [4], have a long history in computer vision. Many approaches apply Bayesian filtering and achieve reasonable results [13,47] (https://github.com/lstappen/xaware). However, most systems designed for controlled environments are not robust enough for use in the context of human-robot interaction and driving assistance systems.…”
Section: Related Work (mentioning)
confidence: 99%
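The statement above names Bayesian filtering as a common tool in head pose and gaze tracking. As a generic, hedged illustration only (not the cited systems' actual filters), the sketch below applies a minimal constant-velocity Kalman filter to smooth a noisy head yaw signal; the state layout and noise values are placeholder assumptions.

```python
# Minimal sketch of Bayesian filtering for head pose / gaze tracking:
# a constant-velocity Kalman filter smoothing a noisy yaw angle.
# State x = [yaw, yaw_rate]; all noise parameters are illustrative.
import numpy as np

def kalman_smooth(yaw_measurements, dt=1.0 / 30.0,
                  process_var=1e-3, meas_var=4.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # only the yaw angle is observed
    Q = process_var * np.eye(2)             # process noise covariance
    R = np.array([[meas_var]])              # measurement noise covariance
    x = np.array([[yaw_measurements[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in yaw_measurements:
        # Predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with the new yaw measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed
```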
“…For example, recent interactive systems attempt to detect user intents expressed in the form of gestures or voice commands using signals from various sensing devices [1,2,3]. More immersive and natural ways to capture user intent include face orientation estimation [4,5,6], body tracking [7,8,9], and estimation of gaze direction [10,11,12,13,14,15,16,17,18,19,20,21]. For example, the authors of [4,5,6] proposed to estimate a driver’s face orientation with color or depth images taken in a specific environment, such as a vehicle, with the aim of preventing traffic accidents.…”
Section: Introduction (mentioning)
confidence: 99%
“…More immersive and natural ways to capture user intent include face orientation estimation [4,5,6], body tracking [7,8,9], and estimation of gaze direction [10,11,12,13,14,15,16,17,18,19,20,21]. For example, the authors of [4,5,6] proposed to estimate a driver’s face orientation with color or depth images taken in a specific environment, such as a vehicle, with the aim of preventing traffic accidents. Attempts have been made to detect human activity by recognizing the joints of a human body with several Kinects for an interactive virtual training environment [8].…”
Section: Introduction (mentioning)
confidence: 99%