SITUATED INFORMATION SPACES AND SPATIALLY AWARE PALMTOP COMPUTERS

We will no longer need to be tethered to a stationary computer workstation to browse electronic databases or synthetic 3D information spaces transformed onto a 2D display surface. Instead, we will browse, interact with, and manipulate electronic information within the context and situation in which the information originated and where it holds strong meaning. A small, portable, high-fidelity display and spatially aware palmtop computer can act as a window onto the 3D-situated information space, providing a bridge between computer-synthesized data and physical objects. Our Chameleon prototype explores some of the combined input controller and output display paradigms needed to visualize and manipulate 3D-situated information spaces.

Electronic information spaces are encroaching on our everyday environment. We increasingly carry electronic information with us (e.g., floppy diskettes) and also tap into reservoirs of information via access stations (e.g., automatic teller machines, telephones). Indeed, portable computing allows us not only to carry information but also to access, modify, and interact with it in a matter of seconds.
An experimental GUI paradigm is presented which is based on the design goals of maximizing the amount of screen used for application data, reducing the amount that the UI diverts visual attention from the application data, and increasing the quality of input. In pursuit of these goals, we integrated the non-standard UI technologies of multi-sensor tablets, toolglass, transparent UI components, and marking menus. We describe a working prototype of our new paradigm, the rationale behind it, and our experiences introducing it into an existing application. Finally, we present some of the lessons learned: prototypes are useful for breaking the barriers imposed by conventional GUI design, and some of their ideas can still be retrofitted seamlessly into products. Furthermore, the added functionality is not measured only in terms of user performance, but also by the quality of interaction, which allows artists to create new graphic vocabularies and graphic styles.
Despite the considerable quantity of research directed towards multitouch technologies, a set of standardized UI components has not been developed. Menu systems provide a particular challenge, as traditional GUI menus require a level of pointing precision inappropriate for direct finger input. Marking menus are a promising alternative, but have yet to be investigated or adapted for use within multitouch systems. In this paper, we first investigate the human capabilities for performing directional chording gestures, to assess the feasibility of multitouch marking menus. Based on the positive results collected from this study, and in particular, high angular accuracy, we discuss our new multitouch marking menu design, which can increase the number of items in a menu and eliminate a level of depth. A second experiment showed that multitouch marking menus perform significantly faster than traditional hierarchical marking menus, reducing acquisition times in both novice and expert usage modalities.
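The core mechanism behind any marking menu is mapping a stroke's direction onto one of N equally spaced angular sectors, one per menu item. The abstract above reports that users' angular accuracy is high enough to support this. As an illustration only (the paper does not publish its implementation; the function name and sector convention here are assumptions), a minimal sector classifier might look like:

```python
import math

def stroke_direction(start, end, n_items=8):
    """Classify a marking-menu stroke into one of n_items angular sectors.

    start, end: (x, y) touch-down and touch-up points.
    Sector 0 is centered on the positive x-axis ("east"); sectors
    proceed counter-clockwise. Hypothetical sketch, not the paper's code.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    # atan2 is quadrant-aware; normalize the angle into [0, 2*pi)
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector_width = 2 * math.pi / n_items
    # Shift by half a sector so each sector is centered on its direction
    return int(((angle + sector_width / 2) % (2 * math.pi)) // sector_width)
```

A multitouch chord would run this per finger and combine the results, which is one way the design above multiplies the number of reachable items without adding menu depth.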
We are exploring how virtual reality theories can be applied toward palmtop computers. In our prototype, called the Chameleon, a small 4-inch hand-held monitor acts as a palmtop computer with the capabilities of a Silicon Graphics workstation. A 6D input device and a response button are attached to the small monitor to detect user gestures and input selections for issuing commands. An experiment was conducted to evaluate our design and to see how well depth could be perceived in the small screen compared to a large 21-inch screen, and the extent to which movement of the small display (in a palmtop virtual reality condition) could improve depth perception. Results show that with very little training, perception of depth in the palmtop virtual reality condition is about as good as corresponding depth perception in a large (but static) display. Variations to the initial design are also discussed, along with issues to be explored in future research. Our research suggests that palmtop virtual reality may support effective navigation and search and retrieval in rich and portable information spaces.
No abstract
YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.
This paper reports on the experimental evaluation of a Graspable User Interface that employs a "space-multiplexing" input scheme in which each function to be controlled has a dedicated physical transducer, each occupying its own space. This input style contrasts with the more traditional "time-multiplexing" input scheme, which uses one device (such as the mouse) to control different functions at different points in time. A tracking experiment was conducted to compare a traditional GUI design with its time-multiplex input scheme versus a Graspable UI design having a space-multiplex input scheme. We found that the space-multiplex conditions outperform the time-multiplex conditions. In addition, we found that the use of specialized physical form factors for the input devices, instead of generic form factors, provides a performance advantage. We argue that the specialized devices serve as both visual and tactile functional reminders of the associated tool assignment, as well as facilitate manipulation due to the customized form factors.