In 2D interfaces, actions are often represented by fixed tools arranged in menus, palettes, or dedicated parts of a screen, whereas 3D interfaces allow tools to be arranged at different depths relative to the user and moved relative to each other. In this paper we introduce EyeSeeThrough, a novel interaction technique that utilises eye tracking in VR. The user applies an action to an intended object by visually aligning the object with the tool along the line of sight, and then issuing a confirmation command. The underlying idea is to merge the two-step process of 1) selecting a mode in a menu and 2) applying it to a target into one unified interaction. We present a user study comparing the method to the baseline two-step selection. The results showed that our technique outperforms two-step selection in terms of speed and comfort. We further developed a prototype of a virtual living room to demonstrate the practicality of the proposed technique.
Shahram Jalaliniya is a PhD fellow at the IT University of Copenhagen, where he is a member of the Pervasive Interaction Technology (PIT) Lab. His research interests include wearable computing, HCI, pervasive computing, and multimodal interaction. Jalaliniya has master's degrees in information systems from Lund University and in software and technology from the IT University of Copenhagen. He is a member of IEEE. Contact him at jsha@itu.dk.

Thomas Pederson is an associate professor at the IT University of Copenhagen, where he is a member of the Pervasive Interaction Technology (PIT) Lab and the Interaction Design Group. His research interests include HCI, context-aware systems, and pervasive/ubiquitous computing. Pederson has a PhD in computing science from Umeå University, Sweden. He is a member of the ACM. Contact him at tped@itu.dk.
In this paper, we present our body-and-mind-centric approach to the design of wearable personal assistants (WPAs), motivated by the fact that such devices are likely to play an increasing role in everyday life. We also report on the utility of such a device for orthopedic surgeons in hospitals. A prototype of the WPA was developed on Google Glass to support surgeons in three different scenarios: (1) touch-less interaction with medical images, (2) telepresence during surgeries, and (3) mobile access to Electronic Patient Records (EPR) during ward rounds. We evaluated the system in a clinical simulation facility and found that while the WPA can be a viable solution for touch-less interaction and remote collaboration during surgeries, using the WPA in ward rounds might interfere with social interaction between clinicians and patients. Finally, we present our ongoing exploration of gaze and gesture as alternative input modalities for WPAs, inspired by the hospital study.
We conducted an empirical study of 57 children using printed booklet and digital tablet instructions for LEGO® construction while they wore a head-mounted gaze tracker. Booklets caused a particularly strong pupil dilation when encountered as the first medium. Subjective responses confirmed the booklet to be more difficult to use. The children who were least productive and asked for assistance most often had a significantly different pupil pattern than the rest. Our findings suggest that it is possible to collect pupil size data in unconstrained work scenarios, providing insight into task effort and difficulties.