Proceedings of the Fifth International ACM Conference on Assistive Technologies 2002
DOI: 10.1145/638249.638272
Zooming interfaces!

Abstract: This paper quantifies the benefits and usability problems associated with eye-based pointing for direct interaction with a standard graphical user interface. It shows where and how, with the addition of a second supporting modality, the typically poor performance and subjective assessment of eye-based pointing devices can be improved to match the performance of other assistive technology devices. It shows that target size is the overriding factor affecting device performance and that when target sizes are artificial…

Cited by 56 publications (13 citation statements)
References 24 publications
“…Some previous work has presented the idea that the user can zoom in on specific parts of the interface in the process of selecting smaller targets [Bates and Istance, 2002]. Ultimately, this type of input system could potentially increase the appeal of games as well as increasing accessibility since hands free control is novel and completely unobtrusive.…”
Section: Discussion
confidence: 99%
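The zoom-to-select idea quoted above can be illustrated with a minimal sketch. This is not the method of Bates and Istance, only an assumed model: a target is treated as reliably selectable by gaze once its magnified on-screen size exceeds the tracker's accuracy radius, and the accuracy radius and zoom factor below are made-up values.

```python
ACCURACY_PX = 40  # assumed eye-tracker accuracy radius in pixels

def needs_zoom(target_size_px: float, zoom: float = 1.0) -> bool:
    """A target is reliably selectable once its magnified on-screen
    size exceeds the tracker's accuracy radius (assumed model)."""
    return target_size_px * zoom <= ACCURACY_PX

def select_with_zoom(target_size_px: float, zoom_factor: float = 4.0):
    """Return the interaction steps needed to select a target:
    zero or more magnification steps, then a dwell to select."""
    steps = []
    zoom = 1.0
    while needs_zoom(target_size_px, zoom):
        zoom *= zoom_factor  # magnify the region around the gaze point
        steps.append(f"zoom x{zoom:g}")
    steps.append("dwell to select")
    return steps
```

Under this model, a large target is selected directly, while a small one first triggers one or more zoom steps, which matches the quoted observation that zooming makes small-target selection more robust at the cost of extra steps.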
“…Space-efficient techniques, such as hierarchical menus, can facilitate interaction with large targets, but slow down performance, as they require more steps for selection. Dynamic approaches, such as zooming or fisheye lenses [1,4,9] can make selection more robust, but may visually distract the user. Gaze gestures [8,16,33,47] have been found to be more robust against noise in comparison to target-based selections, but may require unnaturally large saccades to overcome tracking problems and may be hard to learn.…”
Section: Design Approaches to Robust Interaction
confidence: 99%
“…The keyboard is moved to the top where tracking is relatively best, and letters are grouped to account for the vertical offset and spread of gaze points (grouped-key text entry may then be disambiguated algorithmically using text prediction, or combined with a secondary input action to select individual characters). The links under "Trends" are merged into one gaze target, while keeping them visually as before; if activated, an alternative pointing technique like zooming [4,10] or fish-eye lenses [1] could be applied. Similarly, the scroll buttons are kept as before, but their active gaze region expands beyond the screen and toward the static message window.…”
Section: Error-Aware and Adaptive Applications
confidence: 99%
“…There are four techniques for measuring eye movements: (1) electrooculography (EOG); (2) suction cups or contact lenses; (3) photo- or video-oculography; and (4) video detection based on the pupil and the corneal reflection ( Duchowski, 2007 ). This latter technique allows researchers to measure a point of regard in relation to what is being observed, which can be of high or low precision (based on the type of application required) ( Bates and Istance, 2002 ; Biswas, 2016 ). Although there are some discussions on whether the change of observation positions may have random behavior, the most recent studies indicate that the process of fixing the gaze and looking at an object in a scene is an efficient, non-random process ( Rajashekar, 2004 ; Riche et al., 2013 ), which is regulated by exogenous (stimulus-driven) or endogenous (cognitively-driven) factors ( Smith and Mital, 2013 ).…”
Section: Introduction
confidence: 99%
“…With calibration, these devices can detect the user's point of interest, i.e. the point at which they are actually looking ( Bates and Istance, 2002 ; Ward et al., 2000 ; MacKenzie et al., 2012 ).…”
Section: Introduction
confidence: 99%
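The calibration step mentioned in the quote above can be sketched as a simple per-axis linear fit. This is an illustrative assumption, not any cited system's actual procedure: raw tracker coordinates are mapped to screen coordinates by fitting a gain and offset for each axis with ordinary least squares over a set of calibration points.

```python
def fit_axis(raw, screen):
    """Least-squares fit of screen = gain * raw + offset for one axis."""
    n = len(raw)
    mean_r = sum(raw) / n
    mean_s = sum(screen) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(raw, screen))
    var = sum((r - mean_r) ** 2 for r in raw)
    gain = cov / var
    offset = mean_s - gain * mean_r
    return gain, offset

def calibrate(raw_points, screen_points):
    """Fit x and y independently from (raw, screen) point pairs and
    return a function mapping raw gaze coordinates to screen pixels."""
    gx, ox = fit_axis([p[0] for p in raw_points],
                      [p[0] for p in screen_points])
    gy, oy = fit_axis([p[1] for p in raw_points],
                      [p[1] for p in screen_points])
    return lambda x, y: (gx * x + ox, gy * y + oy)
```

For example, calibrating on four corner points lets the returned mapping place an intermediate raw gaze sample at the corresponding screen position; real trackers typically use more calibration points and richer (e.g. polynomial) models to handle distortion.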