This paper presents Pico (Physical Intervention in Computational Optimization), a new type of human-computer interface based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is built around a tabletop interaction surface that can sense and move small objects placed on top of it. The positions of these physical objects represent and control parameters inside a software application, such as a system for optimizing the configuration of radio towers in a cellular telephone network. The computer autonomously attempts to optimize the network, moving the objects on the table as it changes their corresponding parameters in software. As the objects move, the user can constrain their motion with his or her hands or with many other kinds of everyday physical objects, leaving ample room for improvisation. This approach leverages the user's mechanical intuition for how objects respond to physical forces, and it allows the user to balance the numerical optimization performed by the computer against other goals that are difficult to quantify. Subjects in an evaluation were more effective at solving a complex spatial layout problem with this system than with either of two alternative interfaces that lacked actuation.
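The interaction described above can be modeled as an iterative optimizer whose proposed moves are projected through user-imposed constraints before being applied. The following is a minimal illustrative sketch, not the paper's actual algorithm: the objective, the `clamp_to_table` constraint, and all names are hypothetical stand-ins for whatever optimization Pico performs.

```python
# Hypothetical sketch of a Pico-style loop: the optimizer proposes moves
# for each object, and user-imposed mechanical constraints are modeled as
# a projection that clamps each proposed position. The objective below is
# a toy stand-in, not the paper's cellular-coverage model.
import random

def coverage_score(towers):
    """Toy objective: reward spreading towers apart (illustrative only)."""
    total = 0.0
    for i, (xi, yi) in enumerate(towers):
        for xj, yj in towers[i + 1:]:
            total += ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
    return total

def clamp_to_table(pos, lo=0.0, hi=1.0):
    """Models a physical constraint: the object cannot leave the tabletop
    (or a region the user has fenced off with hands or other objects)."""
    x, y = pos
    return (min(max(x, lo), hi), min(max(y, lo), hi))

def optimize(towers, steps=200, step_size=0.05, seed=0):
    """Simple hill climbing: propose a small move for one object,
    project it through the constraint, keep it only if it improves."""
    rng = random.Random(seed)
    towers = list(towers)
    best = coverage_score(towers)
    for _ in range(steps):
        i = rng.randrange(len(towers))
        x, y = towers[i]
        proposal = clamp_to_table((x + rng.uniform(-step_size, step_size),
                                   y + rng.uniform(-step_size, step_size)))
        candidate = towers[:i] + [proposal] + towers[i + 1:]
        score = coverage_score(candidate)
        if score > best:  # accept only improving moves
            towers, best = candidate, score
    return towers, best
```

In the real system the constraint is not a fixed clamp but whatever the user's hands and objects happen to block at that moment; the point of the sketch is only that the optimizer's moves pass through a constraint projection the user controls.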
We present an experiment on cooperative bimanual action. Right-handed subjects manipulated a pair of physical objects, a tool and a target object, so that the tool would touch a target on the object (Fig. 1). For this task, there is a marked specialization of the hands. Performance is best when the left hand orients the target object and the right hand manipulates the tool, but is significantly reduced when these roles are reversed. This suggests that the right hand operates relative to the frame of reference of the left hand. Furthermore, when physical constraints guide the tool placement, this fundamentally changes the type of motor control required. The task is tremendously simplified for both hands, and reversing the roles of the hands is no longer an important factor. Thus, specialization of the roles of the hands is significant only for skilled manipulation.
We present a study comparing how people use space in a Tangible User Interface (TUI) and in a Graphical User Interface (GUI). We asked subjects to read ten summaries of recent news articles and to think about the relationships between them. In our TUI condition, we bound each of the summaries to one of ten visually identical wooden blocks. In our GUI condition, each summary was represented by an icon on the screen. We asked subjects to indicate the location of each summary by pointing to the corresponding icon or wooden block. Afterward, we interviewed them about the strategies they used to position the blocks or icons during the task. We observed that TUI subjects performed better at the location recall task than GUI subjects. In addition, some TUI subjects used the spatial relationship between specific blocks and parts of the environment to help them remember the content of those blocks, while GUI subjects did not do this. Those TUI subjects who reported encoding information using this strategy tended to perform better at the recall task than those who did not.
The task of organizing information is typically performed either by physically manipulating note cards or sticky notes or by arranging icons on a computer with a graphical user interface. We present a new tangible interface platform for manipulating discrete pieces of abstract information, which attempts to combine the benefits of each of these two alternatives into a single system. We developed interaction techniques and an example application for organizing conference papers. We assessed the effectiveness of our system by experimentally comparing it to both graphical and paper interfaces. The results suggest that our tangible interface can provide a more effective means of organizing, grouping, and manipulating data than either physical operations or graphical computer interaction alone.