Proceedings of the 2016 ACM Conference on Designing Interactive Systems (DIS '16)
DOI: 10.1145/2901790.2901867
AmbiGaze: Direct Control of Ambient Devices by Gaze

Abstract: Eye tracking offers many opportunities for direct device control in smart environments, but issues such as the need for calibration and the Midas touch problem make it impractical. In this paper, we propose AmbiGaze, a smart environment that employs the animation of targets to provide users with direct control of devices by gaze only through smooth pursuit tracking. We propose a design space of means of exposing functionality through movement and illustrate the concept through four prototypes. We evaluated the…

Cited by 60 publications (16 citation statements)
References 24 publications
“…Early works introduced the principle as enabling "pointing without a pointer" and "motion-pointing" [9,37,38], inspired by perceptual control theory [19] and naturally harmonic human motor behaviour [11]. Recent work adopted motion correlation for gaze- and gesture-based interaction [5,8,25,33,34], leveraging humans' natural ability to smoothly follow motion with their eyes and hands. This prior body of work, recently reviewed in depth by Velloso et al. [32], demonstrated advantages of motion correlation: the high discoverability of the available gestures as they are continuously displayed [5,9]; implicit coupling of input and output coordinate spaces without need for calibration [8,34]; usability with feedback modalities that are not suited for pointing [33,38]; no split of attention between a cursor and a target [9]; and the capacity for multi-user input [5].…”
Section: Motion Correlation As Input Method
Citation type: mentioning (confidence: 99%)
“…It was first demonstrated in conventional desktop settings for selection of animated widgets by matching mouse movement [9,37,38]. It has since been studied for spontaneous touchless interaction with public displays [5,35], input "at a glance" on smartwatches [8], and control of diverse types of devices in a smart environment [33]. Most of these works focused on gaze as input modality, leveraging specific properties of human smooth pursuit eye movement,…”
Section: Motion Correlation As Input Method
Citation type: mentioning (confidence: 99%)
“…Standard input methods are dwell-time activation (i.e., looking at a target for a set time, for instance 500 ms, e.g., Ware & Mikaelian, 1987); stroke activation (i.e., looking in one or several directions, in a consecutive order, with a saccade in between, e.g., Drewes & Schmidt, 2007); and pursuit activation (i.e., following a smoothly moving target area, e.g., Vidal, Bulling, & Gellersen, 2013). Current gaze-interaction research focuses on challenges and potentials in smart-phone interaction (e.g., Rozado, Moreno, Agustin, Rodriguez, & Varona, 2015), smart-watches (e.g., Hansen et al., 2016), ubiquitous displays (e.g., Velloso, Wirth, Weichel, Esteves, & Gellersen, 2016) and head-mounted displays (e.g., Itoh & Klinker, 2014).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
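The statement above lists three standard gaze-activation methods. As a concrete illustration of the first of these, here is a minimal sketch of dwell-time activation; the class name, the 500 ms default, and the idea of feeding it the target currently under gaze per sample are illustrative assumptions, not an API from any of the cited works.

```python
# Minimal sketch of dwell-time activation (illustrative, not from the cited works):
# a target fires once gaze has rested on it continuously for a set duration.
import time
from typing import Hashable, Optional

class DwellDetector:
    def __init__(self, dwell_ms: float = 500.0):  # 500 ms is the example value quoted above
        self.dwell_ms = dwell_ms
        self._target: Optional[Hashable] = None   # target currently under gaze
        self._enter_time: float = 0.0             # when gaze entered that target

    def update(self, target_id: Optional[Hashable]) -> Optional[Hashable]:
        """Call once per gaze sample with the id of the target under gaze (or None).
        Returns the target id when the dwell time is reached, otherwise None."""
        now = time.monotonic()
        if target_id != self._target:
            # Gaze moved to another target (or off all targets): restart the timer.
            self._target = target_id
            self._enter_time = now
            return None
        if target_id is not None and (now - self._enter_time) * 1000.0 >= self.dwell_ms:
            self._enter_time = now  # restart so the target does not re-fire every sample
            return target_id
        return None
```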
“…Works that employ this approach consider the moving target to be gazed at if the correlation between the target's positions and the user's gaze is above a certain threshold. Similar to previous implementations of Pursuits [Esteves et al. 2015; Khamis et al. 2015, 2016a, 2017; Kosch et al. 2018; Velloso et al. 2016, 2017; Vidal et al. 2013], we use gaze estimates obtained using an uncalibrated eye tracker. We used the correlation coefficient to see how far the eye movements deviate from the target's trajectory while it is hiding.…”
Section: Quantitative Data
Citation type: mentioning (confidence: 99%)
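The statement above describes the core of Pursuits-style motion correlation: a moving target counts as selected when the correlation between its trajectory and the gaze samples exceeds a threshold. Below is a minimal sketch of that idea using the Pearson correlation coefficient over a sliding window; the window contents, the 0.8 threshold, and the per-axis combination are illustrative assumptions rather than the parameters used in the cited works.

```python
# Minimal sketch of motion-correlation (Pursuits-style) target selection.
# Threshold and windowing are illustrative assumptions, not values from the cited works.
import numpy as np

def pearson(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two 1-D sample series."""
    if a.std() == 0.0 or b.std() == 0.0:
        return 0.0  # a constant series carries no motion to correlate with
    return float(np.corrcoef(a, b)[0, 1])

def followed_target(gaze_xy: np.ndarray, targets_xy: dict, threshold: float = 0.8):
    """gaze_xy: (N, 2) gaze samples from the current time window.
    targets_xy: {target_id: (N, 2) target positions over the same window}.
    Returns the target whose motion best matches the gaze, provided its
    correlation exceeds the threshold on both axes; otherwise None."""
    best_id, best_corr = None, threshold
    for target_id, traj in targets_xy.items():
        # Correlate x and y trajectories separately and keep the weaker axis,
        # so a target is only selected when both axes are followed.
        corr = min(pearson(gaze_xy[:, 0], traj[:, 0]),
                   pearson(gaze_xy[:, 1], traj[:, 1]))
        if corr >= best_corr:
            best_id, best_corr = target_id, corr
    return best_id
```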