Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology 2019
DOI: 10.1145/3332165.3347921
Eye&Head

Abstract: Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-…

Cited by 79 publications (18 citation statements)
References 57 publications

Citation statements:
“…We see similar results when comparing eye tracking to mouse input, and again for head tracking [18]. A prior study [48] found a higher throughput compared to these prior experiments when using gaze; however, it can be challenging to make comparisons across studies, and that work did not include any comparison to common modalities, which we utilize in our study. Unlike our work, many prior studies, including the three discussed here [18,39,48], do not measure or report eye-tracking calibration statistics; at best they report the manufacturer's stated eye-tracking quality, but even that is rare.…”
Section: Introduction (supporting)
confidence: 66%
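The throughput mentioned in this statement is, in pointing studies of this kind, typically the Fitts' law measure of pointing performance. For reference, a minimal sketch of the basic Shannon formulation; the effective-width corrections that individual studies may apply are omitted here, and the function name and example values are illustrative, not taken from the cited works.

    import math

    def fitts_throughput(distance, width, movement_time_s):
        """Pointing throughput in bits/s: index of difficulty over movement
        time, with ID = log2(D/W + 1) (Shannon formulation)."""
        index_of_difficulty = math.log2(distance / width + 1.0)
        return index_of_difficulty / movement_time_s

    # Example: a 512 px movement to a 64 px target completed in 0.9 s.
    print(round(fitts_throughput(512, 64, 0.9), 2))  # ~3.52 bits/s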
“…In cases where the eye tracker's performance is poor but not reported at the participant level, it is impossible to know whether poor interaction efficiency is genuinely due to the experimental manipulation or simply the result of tracking errors.…”
Section: Introduction (mentioning)
confidence: 99%
“…Here, gaze is typically rendered as eyes-only when gaze shifts are below a threshold of 10–15°, and coupled with head movement otherwise [Ruhland et al 2015]. Recent work also introduced a distinction of eyes-only versus head-supported gaze for point-and-dwell input [Sidenmark and Gellersen 2019b]. Other work has built on coordinated eye and head movement for estimation of gaze depth and target disambiguation in 3D interfaces [Mardanbegi et al 2019a,b].…”
Section: Related Work (mentioning)
confidence: 99%
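The threshold rule quoted above lends itself to a compact illustration. Below is a minimal sketch of classifying a gaze shift as eyes-only or head-supported by its angular amplitude; the 12.5° cutoff, function names, and vector representation are assumptions for illustration, not values prescribed by the cited papers.

    import math

    # Assumed cutoff within the 10-15 degree range quoted above; 12.5 is an
    # illustrative midpoint, not a value from the cited papers.
    EYES_ONLY_THRESHOLD_DEG = 12.5

    def classify_gaze_shift(prev_dir, new_dir, threshold_deg=EYES_ONLY_THRESHOLD_DEG):
        """Classify a gaze shift as eyes-only or head-supported by its
        angular amplitude. Directions are unit 3-vectors (x, y, z)."""
        dot = sum(a * b for a, b in zip(prev_dir, new_dir))
        amplitude_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        return "eyes-only" if amplitude_deg < threshold_deg else "head-supported"

    # A ~20 degree horizontal shift exceeds the cutoff:
    forward = (0.0, 0.0, 1.0)
    shifted = (math.sin(math.radians(20.0)), 0.0, math.cos(math.radians(20.0)))
    print(classify_gaze_shift(forward, shifted))  # head-supported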
“…It has also been proposed to use gaze for coarse-grained selection, followed by head movement for subsequent confirmation [22,36] or refinement of positional input [20]. Other work has proposed techniques that leverage concurrent eye and head movement for interaction and target depth estimation [21,23,35]. The hands-free technique we implemented likewise combines head and eye tracking, but with the head tracked for cone-casting and eye movement matched against the outline motion presented by candidate targets.…”
Section: Related Work (mentioning)
confidence: 99%
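The matching step described in this last statement (eye movement compared against the outline motion of candidate targets) is commonly realized as a trajectory correlation, in the style of smooth-pursuit selection. A minimal sketch under that assumption follows; the function names, the 0.8 correlation threshold, and the per-axis scoring are illustrative choices, not the cited papers' exact method.

    def pearson(xs, ys):
        """Pearson correlation between two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    def match_gaze_to_target(eye_x, eye_y, candidates, min_corr=0.8):
        """candidates maps a target id to its outline trajectory (xs, ys),
        sampled at the same rate as the eye signal. Returns the target whose
        motion best correlates with the eye movement on both axes, or None
        if no candidate exceeds the threshold."""
        best_id, best_score = None, min_corr
        for target_id, (tx, ty) in candidates.items():
            score = min(pearson(eye_x, tx), pearson(eye_y, ty))
            if score > best_score:
                best_id, best_score = target_id, score
        return best_id

In practice the candidate set would first be narrowed by the head-directed cone-cast, so the correlation only has to disambiguate among the few targets inside the cone.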