Figure 1. (a) Very large (5 m × 1.8 m), high-resolution (6144 × 2304 pixels) display; (b) visualization showing an ambiguous-posture threshold warning; (c) the hand controls pointer position and makes a "click" selection with finger or thumb.
Recent advances in sensing technology have enabled a new generation of tabletop displays that can sense multiple points of input from several users simultaneously. However, apart from a few demonstration techniques [17], current user interfaces do not take advantage of this increased input bandwidth. We present a variety of multi-finger and whole-hand gestural interaction techniques for these displays that leverage and extend the types of actions that people perform when interacting on real physical tabletops. Apart from gestural input techniques, we also explore interaction and visualization techniques for supporting shared spaces, awareness, and privacy. These techniques are demonstrated within a prototype room furniture layout application, called RoomPlanner.
We investigate the differences, in terms of both quantitative performance and subjective preference, between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.
Conference on Human Factors in Computing Systems (CHI)
Volumetric displays, which display imagery in true 3D space, are a promising platform for the display and manipulation of 3D data. To fully leverage their capabilities, appropriate user interfaces and interaction techniques must be designed. In this paper, we explore 3D selection techniques for volumetric displays. In a first experiment, we find a ray cursor to be superior to a 3D point cursor in a single target environment. To address the difficulties associated with dense target environments we design four new ray cursor techniques which provide disambiguation mechanisms for multiple intersected targets. Our techniques showed varied success in a second, dense target experiment. One of the new techniques, the depth ray, performed particularly well, significantly reducing movement time, error rate, and input device footprint in comparison to the 3D point cursor.
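The depth ray described above can be illustrated with a short sketch: a ray cursor may intersect several targets at once, and the depth ray disambiguates by selecting the intersected target whose hit point lies closest to a user-controlled depth marker along the ray. This is a minimal illustration under assumed simplifications, not the authors' implementation; targets are modeled as spheres and the depth marker is reduced to a scalar distance along the ray.

```python
import math

def ray_sphere_hits(origin, direction, targets):
    """Return (t, target) for each sphere the ray intersects.
    origin/direction: 3-tuples (direction assumed unit length);
    targets: list of (center, radius) pairs."""
    hits = []
    for center, radius in targets:
        oc = [o - c for o, c in zip(origin, center)]
        b = sum(d * v for d, v in zip(direction, oc))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - c
        if disc >= 0:
            t = -b - math.sqrt(disc)  # distance to nearest intersection
            if t >= 0:
                hits.append((t, (center, radius)))
    return hits

def depth_ray_select(origin, direction, targets, depth):
    """Depth-ray disambiguation: of all intersected targets, pick the
    one whose hit point is closest to the depth marker at distance
    `depth` along the ray. Returns None if the ray misses everything."""
    hits = ray_sphere_hits(origin, direction, targets)
    if not hits:
        return None
    t, target = min(hits, key=lambda h: abs(h[0] - depth))
    return target
```

In the actual technique the depth marker is moved with the input device; here it is simply a parameter, so sliding `depth` along the ray switches the selection between intersected targets without re-aiming the ray.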
We explore the feasibility of muscle-computer interfaces (muCIs): an interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible. As a first step towards realizing the muCI concept, we conducted an experiment to explore the potential of exploiting muscular sensing and processing technologies for muCIs. We present results demonstrating accurate gesture classification with an off-the-shelf electromyography (EMG) device. Specifically, using 10 sensors worn in a narrow band around the upper forearm, we were able to differentiate position and pressure of finger presses, as well as classify tapping and lifting gestures across all five fingers. We conclude with a discussion of the implications of our results for future muCI designs.
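The pipeline implied by this kind of EMG work, windowed multi-channel signals reduced to per-channel features and fed to a classifier, can be sketched as follows. This is a hedged toy example, not the paper's method: the RMS feature, the nearest-centroid classifier, and the `make_window` signal simulator are all stand-ins chosen for brevity.

```python
import math
import random

N_CHANNELS = 10  # sensors in a narrow band around the upper forearm

def rms_features(window):
    """Per-channel root-mean-square amplitude of one EMG window.
    `window` is a list of samples, each a list of N_CHANNELS values."""
    n = len(window)
    return [math.sqrt(sum(s[ch] ** 2 for s in window) / n)
            for ch in range(N_CHANNELS)]

def make_window(profile, n=64):
    """Hypothetical signal simulator: each channel oscillates with a
    fixed amplitude taken from `profile` (random sign per sample)."""
    return [[amp * random.choice([-1, 1]) for amp in profile]
            for _ in range(n)]

class CentroidClassifier:
    """Nearest-centroid gesture classifier, a simple stand-in for the
    classifiers typically trained on EMG features."""
    def fit(self, feats, labels):
        by_label = {}
        for f, y in zip(feats, labels):
            by_label.setdefault(y, []).append(f)
        self.centroids = {
            y: [sum(col) / len(col) for col in zip(*fs)]
            for y, fs in by_label.items()
        }
    def predict(self, f):
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(f, self.centroids[y])))
```

Training on a few simulated windows per gesture (e.g. an "index" profile that activates channels 0-4 and a "pinky" profile that activates channels 5-9) yields distinct centroids, and fresh windows are classified by their nearest centroid in feature space.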