2017
DOI: 10.1016/j.ijar.2017.04.007

A Bayesian hierarchy for robust gaze estimation in human–robot interaction

Abstract: In this text, we will present a probabilistic solution for robust gaze estimation in the context of human–robot interaction. Gaze estimation, in the sense of continuously assessing the gaze direction of an interlocutor so as to determine his/her focus of visual attention, is important in several computer vision applications, such as the development of non-intrusive gaze-tracking equipment for psychophysical experiments in neuroscience, specialised telecommunication devices, video surveillance, human-comp…

Cited by 7 publications (4 citation statements). References: 34 publications.

Citation statements (ordered by relevance):
“…While the computer vision community has made significant progress in the area of gaze estimation, the application is typically limited to settings where the head pose is constrained and the person is close to the camera; examples include phone [16] and tablet [15] use, as well as screen-based settings [23]. However, scenarios that are typically encountered in human-robot interactions have seen little attention [18,22,20].…”
Section: Reference Images (mentioning)
confidence: 99%
“…For example, customers could observe service robot head movements (e.g. nod) as a sign of interest or agreement in service interactions (Lanillos et al., 2017).…”
Section: X3: Empathy (mentioning)
confidence: 99%
“…The correlation between head pose and gaze has also been exploited in [23]. More recently, [24] combined head and eye features to estimate the gaze direction using an RGB-D camera. The method still requires that both eyes are visible.…”
Section: Related Work (mentioning)
confidence: 99%
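To make the general idea of probabilistically combining head-pose and eye cues concrete, the sketch below shows a minimal precision-weighted Gaussian fusion of two noisy gaze-direction estimates. This is an illustration of the underlying principle only, not the paper's actual Bayesian hierarchy; the function and variable names (fuse_gaussian_cues, mu_head, cov_eye, and so on) are assumptions made for this example.

```python
# Minimal illustrative sketch (not the paper's model): precision-weighted
# Bayesian fusion of two noisy gaze-direction cues, one derived from head
# pose and one from the eyes, each modeled as a Gaussian over (yaw, pitch).
import numpy as np

def fuse_gaussian_cues(mu_head, cov_head, mu_eye, cov_eye):
    """Fuse two Gaussian gaze estimates via the product-of-Gaussians rule.

    All names here are hypothetical; angles are in radians.
    """
    prec_head = np.linalg.inv(cov_head)   # precision of the head-pose cue
    prec_eye = np.linalg.inv(cov_eye)     # precision of the eye cue
    cov_post = np.linalg.inv(prec_head + prec_eye)
    mu_post = cov_post @ (prec_head @ mu_head + prec_eye @ mu_eye)
    return mu_post, cov_post

# Example: a confident eye cue pulls the fused estimate toward itself.
mu_head = np.array([0.20, -0.05])   # head-pose-based gaze (yaw, pitch)
cov_head = np.diag([0.10, 0.10])    # broad uncertainty
mu_eye = np.array([0.05, 0.00])     # eye-based gaze
cov_eye = np.diag([0.02, 0.02])     # tighter uncertainty
print(fuse_gaussian_cues(mu_head, cov_head, mu_eye, cov_eye))
```

In this toy setup, whichever cue has the smaller covariance dominates the fused estimate, which mirrors why an eye-based measurement can refine a coarse head-pose prior when the eyes are visible, and why the estimate degrades gracefully toward the head-pose cue when they are not.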