The quality and availability of eye tracking equipment have been increasing while costs have been decreasing. These trends make it increasingly feasible to use eye trackers for entertainment. Games that can be controlled solely through eye movements would be accessible to people with reduced limb mobility or control. At the same time, eye tracking can change the gaming experience for all players by offering richer input and enabling attention-aware games. Eye tracking is not yet widely supported in gaming, and games developed specifically for use with an eye tracker are rare. This paper reviews past work on eye tracker gaming and charts future development possibilities in its subdomains. It argues that, based on their user input requirements and gaming contexts, conventional computer games can be classified into groups that offer fundamentally different opportunities for eye tracker input. Beyond these inherent design issues, the technical implementations of games pose further challenges and offer varying levels of support for eye tracker use.
Spatial information can be difficult to present to a visually impaired computer user. In this paper we examine a new kind of tactile cueing for non-visual interaction as a potential solution, building on earlier work on vibrotactile Tactons; unlike vibrotactile Tactons, however, we use a pin array to stimulate the fingertip. We describe how to design static and dynamic Tactons by defining their basic components, and then present user tests examining how easily different forms of pin array Tactons can be distinguished. These experiments demonstrate usable static, wave, and blinking pin array Tacton sets for guiding a user in one of eight directions. A further study shows the benefits of structuring Tactons to convey information through multiple independent parameters of the signal: participants perceived more information through a single Tacton. Finally, two applications using these Tactons are presented: a maze exploration application and an electric circuit exploration application, designed for use by and tested with visually impaired users.
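The idea of a static directional Tacton can be illustrated with a small sketch. The abstract does not specify the actual pin layouts or hardware interface, so the 4x4 grid and the raised-pin patterns below are purely illustrative assumptions: each of the eight directions is rendered as a line of raised pins from the grid centre toward the corresponding edge.

```python
# Illustrative sketch only: the 4x4 grid size and these pin patterns
# are assumptions, not the paper's actual Tacton designs.

# Compass directions as (dx, dy) unit steps; y grows downward (row index).
DIRS = {
    "N": (0, -1), "NE": (1, -1), "E": (1, 0), "SE": (1, 1),
    "S": (0, 1), "SW": (-1, 1), "W": (-1, 0), "NW": (-1, -1),
}

def static_tacton(direction, size=4):
    """Return a size x size frame where raised pins (1) form a ray
    from the grid centre toward the given compass direction."""
    dx, dy = DIRS[direction]
    cx = cy = (size - 1) / 2
    frame = [[0] * size for _ in range(size)]
    for step in range(size):
        # Walk outward from the centre in half-pin increments.
        x = round(cx + dx * step / 2)
        y = round(cy + dy * step / 2)
        if 0 <= x < size and 0 <= y < size:
            frame[y][x] = 1
    return frame
```

A dynamic (wave or blinking) Tacton would then be a timed sequence of such frames rather than a single one.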
We evaluated the performance of a wheel mouse, an Xbox 360 controller, the combination of a mouse and a keyboard, and a Trackmouse in FPS target acquisition. The device combinations in which the mouse was used for aiming performed better than the Xbox 360 controller.
Smart glasses are autonomous, capable computers that can perform complex tasks through mobile applications. This paper focuses on text input in a mobile context. We present a new fabric device, connected to smart glasses, for text entry. Integrated into clothing, the device supports a new interaction technique called TEXTile, which allows typing without requiring users to hold a device: users place fingers anywhere on the fabric surface and release them, without looking at the fabric or relying on markers on it. The text entry technique is based on eight finger contact-and-release combinations, identified as the most pleasant of 15 candidates in a survey of 74 participants. A first user study with 20 participants establishes that the eight TEXTile combinations are recognized reliably (98.95% recognition rate). A second study with nine participants evaluates the learning curve of TEXTile: users achieved a mean typing rate of 8.11 WPM at the end of ten 12-minute sessions, which is slow but sufficient for short texts given the technique's other advantages. Results show low error rates for completed tasks and good usability (SUS score of 76). The NASA-TLX questionnaire indicates no substantial mental or physical workload in accomplishing the task.
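The chord-recognition idea behind TEXTile can be sketched as a lookup from the set of fingers in contact to an input token. The abstract does not list the eight survey-selected combinations, so the chords and token names below are hypothetical placeholders used only to show the structure of the technique.

```python
# Hypothetical sketch: these eight finger combinations and token names
# are illustrative assumptions, not the paper's actual chord set.

FINGERS = ("index", "middle", "ring", "little")

CHORDS = {
    frozenset(["index"]): "T1",
    frozenset(["middle"]): "T2",
    frozenset(["ring"]): "T3",
    frozenset(["little"]): "T4",
    frozenset(["index", "middle"]): "T5",
    frozenset(["middle", "ring"]): "T6",
    frozenset(["ring", "little"]): "T7",
    frozenset(FINGERS): "T8",
}

def recognize(contacts):
    """Map a set of fingers in contact with the fabric to an input
    token, or None if the combination is not a supported chord."""
    return CHORDS.get(frozenset(contacts))
```

Because recognition depends only on which fingers touch the fabric, not where, the user never needs to look at the surface.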
We describe the design and evaluation of a gestural text editing technique for touchscreen devices. The gestures are drawn on top of the soft keyboard and interpreted as commands for moving the caret, performing selections, and controlling the clipboard. Our implementation is an Android service that can be used in any text editing task on Android-based devices. We conducted an experiment comparing the gestural editing technique against the widget-based technique available on a smartphone (Samsung Galaxy II with Android 2.3.5). The results show a performance benefit of 13-24% for the gestural technique, depending on the font size. Subjective feedback from the participants was also positive. Because the two editing techniques use different input areas, they can coexist on a device, so gestural editing can be added to any soft keyboard without affecting the experience of users who choose not to use it.
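One way to picture this class of technique is a small dispatcher that turns a swipe drawn over the keyboard into an editing command. This is a minimal sketch, not the authors' implementation: the thresholds, command names, and the two-finger-for-selection convention are all assumptions for illustration.

```python
# Minimal sketch (not the paper's actual implementation): map a swipe
# vector drawn over the soft keyboard to an editing command. The 10 px
# noise threshold and command names are illustrative assumptions.

def interpret_gesture(dx, dy, two_finger=False):
    """Map a swipe (dx, dy in pixels, y downward) to a command string.
    A second finger held down turns caret movement into selection."""
    if abs(dx) < 10 and abs(dy) < 10:
        return "none"  # too small to be intentional: ignore as noise
    if abs(dx) >= abs(dy):
        move = "caret_right" if dx > 0 else "caret_left"
    else:
        move = "caret_down" if dy > 0 else "caret_up"
    return "select_" + move if two_finger else move
```

Because the dispatcher consumes only gestures drawn on the keyboard area, ordinary taps for typing pass through untouched, which is what lets the technique coexist with normal keyboard use.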
We describe a system that shows users the shapes of the EdgeWrite characters within the visual feedback area of EdgeWrite. We compared two versions (static and dynamic) of this design against a printed character chart in a five-session text entry experiment with three groups of eight participants. The participants were able to use EdgeWrite with the integrated help systems. There were no statistically significant differences in text entry rate between the group using the character chart and the two groups using the integrated help. However, the group with the dynamic help was faster than the group with the static help while maintaining a low corrected error rate.
In this paper we investigate the effectiveness of gaze interaction within a near-to-eye display. The practical contribution of the paper concerns combining an eye tracker with smart glasses. The research presented here is related to the eGlasses project, which focuses on the development of an open platform in the form of multisensory electronic glasses and related interaction methods. One of the implemented interaction methods is based on an eye tracking module; both the advantages and the limitations of this method are discussed. We also consider a calibration-free method for estimating fixation points within the near-to-eye display. Keywords: eye tracking; smart glasses; interaction with microdisplay; hands-free interface.
This paper presents a comparative pilot usability study of Dasher and an on-screen keyboard on a head-mounted display. Interaction logging data were captured along with subjective responses (via the SUS questionnaire). The results indicate a strong need to develop text entry systems specifically for smart glasses rather than simply adopting those already available. However, both approaches are useful when there is a need to enter private or sensitive data.