This paper discusses several usability issues related to the use of gestures as an input mode in multimodal interfaces. Gestures have been suggested before as a natural solution for applications that require hands-free and no-touch interaction with computers, such as virtual reality (VR) environments. We introduce a simple but robust 2D computer-vision-based gesture recognition system that was successfully used for interaction in VR environments such as CAVEs and Powerwalls. This interface was tested under three different scenarios: as a regular pointing device in a GUI interface, as a navigation tool, and as a visualization tool. Our experiments show that the time to completion of simple pointing tasks is considerably longer than with a mouse, and that even short periods of use cause fatigue. Despite these drawbacks, gestures as an alternative mode in multimodal interfaces offer several advantages: they provide quick access to computing resources that might be embedded in the environment, in a natural and intuitive way, and they scale nicely to group and collaborative applications, where gestures can be used sporadically.
Modeling large architectural environments is a difficult task due to the intricate nature of these models and the complex dependencies between the structures represented. Moreover, textures are an essential part of architectural models. While the number of geometric primitives is usually relatively low (e.g., many walls are flat surfaces), textures actually contain many detailed architectural elements. We present an approach for modeling architectural scenes by reshaping and combining existing textured models, where the manipulation of geometry and texture are tightly coupled. For geometry, preserving angles, such as floor orientation or vertical walls, is of key importance. We thus allow the user to interactively modify edge lengths while constraining angles. Our texture reshaping solution introduces a measure of directional autosimilarity, used to concentrate stretching in areas of stochastic content and to preserve detailed features elsewhere. We show results on several challenging models, and demonstrate two applications: building complex road structures from simple initial pieces and creating complex game levels for an existing game from pre-existing model pieces.
In this paper we present our experience in using virtual reality technologies to accurately reconstruct and explore ancient and historic city buildings. Virtual reality techniques provide a powerful set of tools for exploring and accessing the history of a city. In order to explore, visualize, and hear such history, we divided the process into three phases: historical data gathering and analysis; 3D reconstruction and modeling; and interactive immersive visualization, auralization, and display. The set of guidelines we devised helped put into practice the extensible tools available in VR, which are not always easy for inexperienced users to combine. These guidelines also streamlined our work and helped avoid problems in subsequent phases. Most importantly, the X3D standard provided an environment that supported the design and validation process as well as the visualization phase. Finally, we present the results achieved and analyze the extensibility of the framework. Although VR tools and techniques are now widely available, there is still a gap between using the tools and truly taking advantage of VR in historic architectural reconstruction, where users can immerse themselves in the reconstructed world and consider scenarios and possibilities that might lead to new insights. This is an ongoing process that we believe will grow and support current architectural development.
Inspired by principles for designing musical instruments, we implemented a new 3D virtual instrument with a particular mapping of touchable virtual spheres to notes and chords of a given musical scale. The objects are metaphors for note keys organized in multiple lines, forming a playable spatial instrument in which the player can perform sequences of notes and chords across the scale using short gestures that minimize jump distances. The idea of different arrangements of notes over the playable space is not new; it has been pursued, for instance, in alternative keyboard layouts. Our implementation employed an Oculus Rift and a Razer Hydra for gesture input and showed that customizing instrumental mappings with 3D tools can ease the performance of complex songs by allowing fast execution of specific note combinations.