Hand tracking enables controller-free interaction with virtual environments, which can, compared to traditional handheld controllers, make virtual reality (VR) experiences more natural and immersive. As naturalness hinges on both technological and user-based features, fine-tuning the former while assessing the latter can be used to increase usability. For a grab-and-place use case in immersive VR, we compared a prototype of a camera-based hand tracking interface (Leap Motion) with customized design elements to the standard Leap Motion application programming interface (API) and a traditional controller solution (Oculus Touch). Usability was tested in 32 young healthy participants, whose performance was analyzed in terms of accuracy, speed and errors as well as subjective experience. We found higher performance and overall usability as well as overall preference for the handheld controller compared to both controller-free solutions. While most measures did not differ between the two controller-free solutions, the modifications made to the Leap API to form our prototype led to a significant decrease in accidental drops. Our results do not support the assumption of higher naturalness for hand tracking but suggest design elements to improve the robustness of controller-free object interaction in a grab-and-place scenario.
Stereoscopy is known as a powerful means of achieving a realistic image representation with a high degree of telepresence. Motion parallax is another important visual cue: it contributes to the naturalness of vision and can reduce artifacts of common stereoscopic representation techniques. This paper focuses on the ratio of (real or virtual) camera movement to head movement ("gain of motion parallax") and on the image reproduction conditions that allow perceiving a stable and natural stereoscopic image providing motion parallax. Results show that the gain of motion parallax should be adjustable by the viewer. For most observers, the preferred gain is lower than the geometrically correct value of 1; a good starting point seems to be a value of 0.75. All three dimensions of head movement and the individual (and currently actual) interocular distance should be taken into account when calculating the appropriate views. Spatial quantization of eye position should be better than 5 min of arc, and temporal sampling of eye position should be done at 40 Hz or more.
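The core relationship described above, scaling of virtual camera movement relative to head movement by a gain factor, can be sketched as follows. This is an illustrative simplification, not code from the paper; the function name, the tuple representation of displacement, and the uniform per-axis scaling are assumptions for the sake of the example.

```python
def virtual_camera_offset(head_offset, gain=0.75):
    """Scale a 3-D head displacement by the motion-parallax gain.

    head_offset: (x, y, z) head displacement, e.g. in metres.
    gain: ratio of virtual camera movement to head movement.
          1.0 is the geometrically correct value; the study found
          most observers prefer a lower gain, around 0.75.
    """
    # Apply the same gain to all three dimensions of head movement,
    # as the abstract notes all three should be taken into account.
    return tuple(gain * component for component in head_offset)

# Example: a 10 cm rightward, 2 cm upward, 4 cm backward head move
offset = virtual_camera_offset((0.10, 0.02, -0.04))
```

A real implementation would also incorporate the viewer's individual interocular distance when computing the left- and right-eye views, and sample eye position at 40 Hz or more, as the abstract recommends.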
The paper addresses the question whether reproducing motion parallax increases the extent of telepresence in videocommunications. Motion parallax is defined as the change of the view due to the observer's movements. It was hypothesized that reproducing motion parallax (a) leads to more precise depth judgments by providing further depth cues, (b) allows 'interactive viewing', i.e., the observer can actively explore the visual scene by changing his/her position, and (c) compensates for stereoscopic "apparent movements". In a Human Factors study, two videoconferencing set-ups providing motion parallax (one stereoscopic and one monoscopic version) were compared with two set-ups (monoscopic and stereoscopic) without motion parallax. Each set-up was used and rated by 32 subjects. The results supported the hypotheses only in part. Even though there was some evidence for more "spatial presence" and for a greater explorability of the scene through motion parallax, the compensation of apparent movements could not be achieved.
INTRODUCTION
A high degree of telepresence requires a realistic sensory (visual and auditory) representation of the remote scene including depth cues and communicative signals. This paper concentrates on the effects of the reproduction of motion parallax on spatial presence.
Multimodal user interfaces promise natural and intuitive human-machine interaction. But is the extra effort for the development of a complex multi-sensor system justified, or can users be satisfied with a single input modality? This study investigates interactions in an industrial weld inspection workstation. Three unimodal interfaces, including spatial interaction with buttons augmented on a workpiece or a worktable, and speech commands, were tested individually and in a multimodal combination. Within the unimodal conditions, users preferred the augmented worktable, but overall, the interindividual usage of all input technologies in the multimodal condition was ranked best. Our findings indicate that the implementation and use of multiple input modalities is valuable, and that it is difficult to predict the usability of individual input modalities for complex systems.
An observer moving in a natural environment is usually able to separate the constant changes of his retinal images in such a way that he perceives the environment and the changes of his observation point independently. The necessary and sufficient conditions to perceive a stable environment in spite of the retinal change produced by self-motion are, however, as yet unknown. We found that under certain conditions a scene that changes during observer motion can appear more stable than a rigid one. In our experiment a scene consisting of a number of LEDs distributed in a dark room was visible through a window. A mechanical device controlled by a head-tracker was used to move the LEDs during head motion to either reduce or enhance motion parallax by a predefined gain factor. The subjects rated the scene with respect to different attributes including apparent deformation and degree of motion perceived. They were also asked to adjust the parallax gain to the value of greatest apparent stability of the scene. Monocular as well as binocular trials were conducted and different fixation points were employed. The result was a general tendency in all conditions to perceive scene motion when the scene was in fact rigid and to perceive the greatest stability when the scene was distorted in such a way as to produce reduced motion parallax.