This paper offers a solution to the mode problem in computer sketch/notetaking programs. Conventionally, the user must specify the intended "draw" or "command" mode prior to performing a stroke. This necessity has proven to be a barrier to the usability of pen/stylus systems. We offer a novel Inferred-Mode interaction protocol that avoids the mode hassles of conventional sketch systems. The system infers the user's intent, if possible, from the properties of the pen trajectory and the context of the trajectory. If the intent is ambiguous, the user is offered a choice mediator in the form of a pop-up button. To maximize the fluidity of drawing, the user is entitled to ignore the mediator and continue drawing. We present decision logic for the inferred mode protocol, and discuss subtleties learned in the course of its development. We also present results of initial user trials validating the usability of this interaction design.
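The abstract above describes decision logic that classifies a stroke as "draw," "command," or ambiguous from trajectory properties and context, falling back to a pop-up mediator when ambiguous. A minimal sketch of such a decision, with invented feature names and thresholds (the paper's actual cues and cutoffs are not specified here):

```python
# Hypothetical inferred-mode decision; features and thresholds are
# illustrative assumptions, not the paper's actual decision logic.
from dataclasses import dataclass

@dataclass
class Stroke:
    over_existing_ink: bool   # context: does the trajectory cross prior ink?
    is_closed_loop: bool      # property: lasso-like closed shape?
    pause_at_end_ms: float    # a terminal pause often signals a command

def infer_mode(stroke: Stroke) -> str:
    """Return 'draw', 'command', or 'ambiguous' (ambiguous -> mediator)."""
    command_evidence = 0
    if stroke.over_existing_ink:
        command_evidence += 1
    if stroke.is_closed_loop:
        command_evidence += 1
    if stroke.pause_at_end_ms > 300:  # invented threshold
        command_evidence += 1
    if command_evidence == 0:
        return "draw"                 # no command cues: ink immediately
    if command_evidence >= 2:
        return "command"              # multiple cues agree
    return "ambiguous"                # one weak cue: offer the mediator
```

In the ambiguous case the system would show the pop-up mediator but, as the protocol requires, treat continued drawing as an implicit dismissal so the pen flow is never blocked.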
Large displays and information kiosks are becoming increasingly common installations in public venues to provide an efficient self-serve means for patrons to access information and/or services. They have evolved over a relatively short period of time from non-digital, non-interactive static displays to more elaborate media-rich digital interactive systems. While the content and purposes of kiosks have changed, they are still largely based on the traditional single-user-driven design paradigm, despite the fact that people often venture to these venues in small social groups, i.e., with family and/or friends. This often limits how groups collaborate and forces transactions to be serialized. This thesis explores design constraints for interaction by multiple social groups in parallel on shared large vertical displays. To better understand design requirements for these systems, this research is separated into two parts: a preliminary observational field study and a follow-up controlled study. Using an observational field study, fundamental patterns of how people use existing public displays are studied: their orientation, positioning, group identification, and behaviour within and between social groups just before, during, and just after usage. These results are then used to motivate a controlled experiment in which two individuals or two pairs of individuals complete tasks concurrently on a low-fidelity large vertical display. Results from the studies demonstrate that vertical surface territories are similar in function to those found on horizontal tabletops, but their definitions and social conventions are different. In addition, the nature of use-while-standing systems results in more complex and dynamic physical territories around the display. We show that the anthropological notion of personal space must be slightly refined for application to vertical displays.
Lastly, I would like to thank those who voluntarily (and involuntarily) participated in the studies conducted as part of this research; without them, this work would not have been possible.
When using motion gestures, i.e., 3D movements of a mobile phone, as an input modality, one significant challenge is how to teach end users the movement parameters necessary to successfully issue a command. Is a simple video or image depicting movement of a smartphone sufficient? Or do we need three-dimensional depictions of movement on external screens to train users? In this paper, we explore mechanisms to teach end users motion gestures, examining two factors. The first factor is how to represent motion gestures: as icons that describe movement, as video that depicts movement using the smartphone screen, or via a Kinect-based teaching mechanism that captures and depicts the gesture on an external display in three-dimensional space. The second factor we explore is recognizer feedback, i.e., a simple representation of the proximity of a performed motion gesture to the desired motion gesture, based on a distance metric extracted from the recognizer. We show that, by combining video with recognizer feedback, participants master motion gestures as quickly as end users who learn using a Kinect. These results demonstrate the viability of training end users to perform motion gestures using only the smartphone display.
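The recognizer feedback described above maps a recognizer distance to a simple proximity score the learner can act on. One plausible shape for such feedback, with an invented distance metric and scaling constant (the paper's actual recognizer and metric are not given here):

```python
# Illustrative proximity feedback; the Euclidean metric over resampled
# accelerometer traces and the max_dist scale are assumptions.
import math

def proximity_feedback(sample, template, max_dist=10.0):
    """Map a recognizer distance to a 0..1 closeness score for the user.

    sample / template: equal-length lists of (x, y, z) accelerometer
    readings, already resampled to the same number of points.
    """
    dist = math.sqrt(sum(
        (a - b) ** 2
        for s, t in zip(sample, template)
        for a, b in zip(s, t)))
    # Clamp so the user always sees a value in [0, 1]; 1.0 = perfect match.
    return max(0.0, 1.0 - dist / max_dist)
```

A training UI could render this score as a progress bar after each attempt, so the learner sees whether a retry moved closer to or further from the target gesture.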
This paper describes ongoing work in the analysis of motion dynamics in pen-based interaction. The overall goal is the creation of a model of user motion in pen gestures where constraint and curvature vary over the length of the path. In particular, speed/curvature models of motion are used to analyze pen trajectories and infer the target constraints obeyed by a user performing selection gestures. We aim to use this information to calculate an effective local spatial selection tolerance associated with each gesture. This can be used to perform selection according to the user's intent rather than the literal stroke. Here, we describe our early analysis of constrained user selection gestures, and outline a prototype application that infers a tolerance for one type of selection gesture. The application selectively splits pen strokes based on an analysis of user motion.
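Speed/curvature models rest on the common observation that pen speed drops where curvature is high, so corners show up as local speed minima along the stroke. A minimal sketch of stroke splitting on that cue, with an invented relative threshold (the paper's actual model and tolerance computation are more involved):

```python
# Assumed simplification: split a pen stroke wherever instantaneous speed
# dips below a fraction of the stroke's peak speed (a corner/constraint cue).
def split_at_speed_minima(speeds, threshold=0.3):
    """Return indices of sample points where a new speed dip begins.

    speeds: per-sample pen speeds along one stroke.
    threshold: fraction of peak speed below which a sample counts as a dip.
    """
    peak = max(speeds)
    cuts, in_dip = [], False
    for i, v in enumerate(speeds):
        if v < threshold * peak and not in_dip:
            cuts.append(i)     # first sample of a dip: candidate split point
            in_dip = True
        elif v >= threshold * peak:
            in_dip = False     # speed recovered; watch for the next dip
    return cuts
```

In a fuller model, the depth and width of each dip relative to the predicted speed for the local curvature would also inform how tight a spatial tolerance the user intended at that point.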