This paper presents a set of interaction techniques for hands-free multi-scale navigation through virtual environments. We believe that hands-free navigation, unlike the majority of navigation techniques based on hand motions, has the greatest potential for maximizing the interactivity of virtual environments, since navigation modes are offloaded from modal hand gestures to more direct motions of the feet and torso. Not only are the users' hands freed to perform tasks such as modeling, notetaking, and object manipulation, but we also believe that foot and torso movements may be inherently more natural for some navigation tasks. The particular interactions we developed include a leaning technique for moving small and medium distances, a foot-gesture-controlled Step WIM that acts as a floor map for moving larger distances, and a viewing technique that enables a user to view a full 360 degrees in a three-walled, semi-immersive environment by subtly amplifying the mapping between the user's torso rotation and the rotation of the virtual world. We formatively designed and evaluated our techniques in existing projects related to archaeological reconstruction, free-form modeling, and interior design. In each case, our informal observations indicate that motions such as walking and leaning are both appropriate for navigation and effective in cognitively simplifying complex virtual environment interactions, since functionality is more evenly distributed across the body.
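The amplified viewing technique can be illustrated with a minimal sketch. The abstract does not give the mapping function or the display's angular span, so the linear gain and the 270-degree span of a three-walled display below are illustrative assumptions, not values from the paper:

```python
def amplified_yaw(physical_yaw_deg: float, display_span_deg: float = 270.0) -> float:
    """Map physical torso yaw to virtual yaw with a constant gain.

    Amplifies rotation so that turning across the display's horizontal
    span sweeps a full 360 degrees of the virtual world. A three-walled
    display is assumed to span roughly 270 degrees; both the span and
    the linear mapping are hypothetical simplifications.
    """
    gain = 360.0 / display_span_deg
    return (physical_yaw_deg * gain) % 360.0
```

For example, with this gain a user who physically turns 135 degrees sees the virtual world rotate 180 degrees, so the rear of the scene becomes visible without the user ever facing away from the screens.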
We present a novel class of virtual reality input devices that combine pop through buttons with 6 DOF trackers. Compared to similar devices that use conventional buttons, pop through devices double the number of potential discrete interaction modes, since each button has two activation states corresponding to light and firm pressure. This additional state per button provides a foundation for addressing a range of shortcomings of conventional virtual environment input devices, including reducing the physical dexterity required to perform interactions, reducing the cognitive complexity of some compound tasks, and enabling the design of less obtrusive devices without sacrificing expressive power. Specifically, we present two novel input devices: the FingerSleeve was designed to be minimally obtrusive physically, whereas the TriggerGun was designed to be physically similar to, yet more functional than, a conventional hand-held trigger device. Further, we present a set of novel navigation and interaction techniques that leverage the capabilities of our pop through button devices to improve interaction quality, and we provide insight into harnessing the potential of pop through buttons for other tasks. Finally, we discuss how we incorporated one of our devices into a real application.
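The two activation states of a pop through button can be sketched as a small classifier. The abstract does not specify how button hardware reports its state, so the normalized pressure reading and the threshold values below are hypothetical; a real pop through button registers two discrete mechanical clicks rather than an analog value:

```python
from enum import Enum


class ButtonState(Enum):
    """Discrete states of a pop through button: released, light click, firm click."""
    RELEASED = 0
    LIGHT = 1
    FIRM = 2


def classify_pressure(raw: float,
                      light_threshold: float = 0.3,
                      firm_threshold: float = 0.8) -> ButtonState:
    """Map a normalized pressure reading in [0, 1] to a discrete state.

    Crossing the first (light) threshold triggers one interaction mode;
    pressing through to the second (firm) threshold triggers another,
    doubling the modes available per button. Thresholds are illustrative.
    """
    if raw >= firm_threshold:
        return ButtonState.FIRM
    if raw >= light_threshold:
        return ButtonState.LIGHT
    return ButtonState.RELEASED
```

An application might bind, for instance, object selection to the light state and object grabbing to the firm state of the same button, collapsing a two-button compound task onto one finger.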