CHI '13 Extended Abstracts on Human Factors in Computing Systems 2013
DOI: 10.1145/2468356.2479610
Gaze-supported foot interaction in zoomable information spaces

Abstract: When working with zoomable information spaces, we can distinguish between primary and secondary tasks (e.g., pan and zoom). In this context, a multimodal combination of gaze and foot input is highly promising for supporting manual interactions, for example, using mouse and keyboard. Motivated by this, we present several alternatives for multimodal gaze-supported foot interaction in a computer desktop setup for pan and zoom. While our eye gaze is ideal to indicate a user's current point of interest a…
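To illustrate the kind of interaction the abstract describes, below is a minimal sketch of gaze-directed zooming: the gaze position selects the zoom target while a signed rate from a foot device (e.g., pedal tilt) drives the zoom speed. The `Viewport`/`zoom_at_gaze`/`foot_rate` names, the exponential rate mapping, and the pedal semantics are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float     # world-space x of the viewport centre
    cy: float     # world-space y of the viewport centre
    scale: float  # pixels per world unit

def screen_to_world(vp: Viewport, sx: float, sy: float, w: int, h: int):
    """Map a screen point (e.g. the current gaze position) to world coordinates."""
    return (vp.cx + (sx - w / 2) / vp.scale,
            vp.cy + (sy - h / 2) / vp.scale)

def zoom_at_gaze(vp: Viewport, gaze_x: float, gaze_y: float,
                 foot_rate: float, dt: float, w: int, h: int) -> Viewport:
    """Zoom toward (or away from) the gazed-at point.

    foot_rate is a signed speed from the foot device
    (assumed mapping: positive = zoom in, negative = zoom out).
    """
    gx, gy = screen_to_world(vp, gaze_x, gaze_y, w, h)   # point of interest under the gaze
    new_scale = vp.scale * (2.0 ** (foot_rate * dt))     # exponential zoom feels uniform across scales
    # Keep the gazed-at world point under the same screen position after zooming.
    vp.cx = gx - (gaze_x - w / 2) / new_scale
    vp.cy = gy - (gaze_y - h / 2) / new_scale
    vp.scale = new_scale
    return vp
```

Panning could be handled analogously by mapping a directional foot signal to a change in (cx, cy), with the gaze point again available as a reference; this, too, is only a sketch under the stated assumptions.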

Cited by 33 publications (15 citation statements). References 8 publications (17 reference statements).
“…Other researchers have proposed techniques that take advantage of other potentially useful information from the user, the environment, or mobile devices such as head orientation [50], gaze location [47], foot taps [18], or the orientation of a smart device [56,57].…”
Section: Pointing With Advanced Sensors (mentioning, confidence: 99%)
“…This is motivated partly to give additional or alternative modalities for complex interfaces, and partly because input through gaze or foot would have meaningful applications for certain types of disabilities. However, combining gaze and foot input appears to have been rare (Göbel et al, 2013; Klamka et al, 2015). Klamka et al (2015) also worked with zoom and pan.…”
Section: Related Work (mentioning, confidence: 99%)
“…Other examples of works that use pedals include controlling a 3D modelling application [Balakrishnan et al 1999], text entry, supporting gaze input [Göbel et al 2013] and toggling the mode of operation of a piano keyboard [Mohamed and Fels 2002]. Zhong et al implemented a pivoting pedal that rotates around the heel in addition to up and down [Zhong et al 2011].…”
Section: Input Sensing (mentioning, confidence: 99%)
“…Garcia et al studied learning effects of users interacting with this device [Garcia and Vu 2009], [Garcia and Vu 2011]. Research prototypes of foot joysticks include the works of Springer and Siebes [Springer and Siebes 1996] and Göbel et al [Göbel et al 2013].…”
Section: Input Sensing (mentioning, confidence: 99%)