The size of human fingers and the lack of sensing precision can make precise touch-screen interactions difficult. We present a set of five techniques, called Dual Finger Selections, which leverage the recent development of multi-touch sensitive displays to help users select very small targets. These techniques facilitate pixel-accurate targeting by adjusting the control-display ratio with a secondary finger while the primary finger controls the movement of the cursor. We also contribute a "clicking" technique, called SimPress, which reduces motion errors during clicking and allows us to simulate a hover state on devices unable to sense proximity. We implemented our techniques on a multi-touch tabletop prototype that offers computer-vision-based tracking. In our formal user study, we tested the performance of our three most promising techniques (Stretch, X-Menu, and Slider) against our baseline (Offset), on four target sizes and three input noise levels. All three chosen techniques outperformed the control technique in terms of error rate reduction and were preferred by our participants, with Stretch being the overall performance and preference winner.
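The core mechanism shared by these techniques, scaling cursor motion by an adjustable control-display (CD) ratio, can be illustrated with a minimal sketch. This is not the paper's implementation; the class and method names are hypothetical, and the Slider-style ratio control is reduced to a single setter.

```python
# Sketch of cursor control with an adjustable control-display (CD) ratio:
# the primary finger moves the cursor, while a secondary finger scales the
# ratio to slow the cursor down for pixel-accurate targeting.
# All names here are illustrative, not from the paper.

class DualFingerCursor:
    def __init__(self):
        self.x, self.y = 0.0, 0.0  # cursor position in screen pixels
        self.cd_ratio = 1.0        # 1.0 = direct touch; >1 slows the cursor

    def set_cd_ratio(self, ratio):
        """Secondary finger adjusts the CD ratio (e.g. via a slider widget)."""
        self.cd_ratio = max(1.0, ratio)

    def move_primary(self, dx, dy):
        """Primary-finger motion (in touch units) moves the cursor,
        scaled down by the CD ratio for finer control."""
        self.x += dx / self.cd_ratio
        self.y += dy / self.cd_ratio
        return self.x, self.y

cursor = DualFingerCursor()
cursor.move_primary(10, 0)          # direct 1:1 mapping -> cursor at (10, 0)
cursor.set_cd_ratio(10.0)           # secondary finger requests 10x precision
print(cursor.move_primary(10, 0))   # now only +1 px -> (11.0, 0.0)
```

With a ratio of 10, a 10-unit finger movement yields a 1-pixel cursor movement, which is how a fingertip several millimeters wide can still acquire a single-pixel target.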
It is generally assumed that touch input cannot be accurate because of the fat finger problem, i.e., the softness of the fingertip combined with the occlusion of the target by the finger. In this paper, we show that this is not the case. We base our argument on a new model of touch inaccuracy. Our model is not based on the fat finger problem, but on the perceived input point model. In its published form, this model states that touch screens report touch location at an offset from the intended target. We generalize this model so that it represents offsets for individual finger postures and users. We thereby switch from the traditional 2D model of touch to a model that considers touch a phenomenon in 3-space. We report a user study in which the generalized model explained 67% of the touch inaccuracy that was previously attributed to the fat finger problem. In the second half of this paper, we present two devices that exploit the new model in order to improve touch accuracy. Both model touch on a per-posture and per-user basis in order to increase accuracy by applying the respective offsets. Our RidgePad prototype extracts posture and user ID from the user's fingerprint during each touch interaction. In a user study, it achieved 1.8 times higher accuracy than a simulated capacitive baseline condition. A prototype based on optical tracking achieved even higher accuracy: 3.3 times the baseline. The increase in accuracy can be used to make touch interfaces more reliable, to pack up to 3.3² > 10 times more controls into the same surface, or to bring touch input to very small mobile devices.
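The generalized perceived-input-point idea amounts to learning a 2D correction offset for each (user, posture) pair from calibration touches, then subtracting that offset from raw touch reports. A minimal sketch of that correction step, with illustrative names and data rather than the paper's actual calibration procedure:

```python
# Sketch of per-(user, posture) offset correction: learn the mean offset
# between reported and intended touch points during calibration, then
# subtract it from subsequent raw reports. Illustrative, not RidgePad's code.

from collections import defaultdict

class OffsetModel:
    def __init__(self):
        # (user, posture) -> [sum_dx, sum_dy, count]
        self.sums = defaultdict(lambda: [0.0, 0.0, 0])

    def calibrate(self, user, posture, reported, intended):
        """Record one calibration touch: where the sensor reported the
        touch vs. where the user actually aimed."""
        dx, dy = reported[0] - intended[0], reported[1] - intended[1]
        s = self.sums[(user, posture)]
        s[0] += dx
        s[1] += dy
        s[2] += 1

    def correct(self, user, posture, reported):
        """Subtract the learned mean offset; fall back to the raw report
        when no calibration data exists for this (user, posture)."""
        s = self.sums.get((user, posture))
        if not s or s[2] == 0:
            return reported
        mx, my = s[0] / s[2], s[1] / s[2]
        return (reported[0] - mx, reported[1] - my)
```

RidgePad's contribution is obtaining the (user, posture) key at touch time from the fingerprint itself; the optical-tracking prototype obtains it from tracked finger pose instead.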
Author Keywords: Touch, touch pad, touch screen, precision, targeting, fingerprint scanner, 6DOF, mobile devices, input, pointing.
ACM Classification Keywords
H.5.2 [Information Interfaces and Presentation]: User Interfaces—Input devices and strategies; B.4.2 [Input/Output Devices].
General Terms: Design, Experimentation, Human Factors, Measurement, Performance, Theory.
In this paper, we explore how to add pointing input capabilities to very small screen devices. At first sight, touch screens seem to allow for particular compactness, because they integrate input and screen into the same physical space. The opposite is true, however, because the user's fingers occlude content and prevent precision. We argue that the key to touch-enabling very small devices is to use touch on the device backside. In order to study this, we created a 2.4" prototype device; we simulate screens smaller than that by masking the screen. We present a user study in which participants completed a pointing task successfully across all display sizes when using a back-of-device interface. The touch-screen-based control condition (enhanced with the Shift technique), in contrast, failed for screen diagonals below 1 inch. We present four form factor concepts based on back-of-device interaction and provide design guidelines extracted from a second user study.
We present a quantitative analysis of delimiters for pen gestures. A delimiter is "something different" in the input stream that a computer can use to determine the structure of input phrases. We study four techniques for delimiting a selection-action gesture phrase consisting of lasso selection plus marking-menu-based command activation. Pigtail is a new technique that uses a small loop to delimit lasso selection from marking (Fig. 1). Handle adds a box to the end of the lasso, from which the user makes a second stroke for marking. Timeout uses dwelling with the pen to delimit the lasso from the mark. Button uses a button press to signal when to delimit the gesture. We describe the role of delimiters in our Scriboli pen interaction testbed, and show how Pigtail supports scope selection, command activation, and direct manipulation all in a single fluid pen gesture.
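A pigtail is recognizable geometrically: the stroke's tail crosses over itself, forming a small loop. A minimal recognition sketch, assuming the stroke arrives as a list of (x, y) points, can test recent stroke segments for a crossing with the standard orientation-based segment-intersection test. The stroke format, window size, and function names are assumptions for illustration, not the Scriboli recognizer.

```python
# Sketch of pigtail detection: a pigtail is a small self-intersecting loop
# at the end of a lasso stroke, so we test the tail's segments for a
# crossing. The orientation-based intersection test is standard geometry;
# the stroke representation and window size are illustrative assumptions.

def _ccw(a, b, c):
    """Signed area: >0 if a->b->c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if open segments p1-p2 and p3-p4 properly intersect."""
    d1, d2 = _ccw(p3, p4, p1), _ccw(p3, p4, p2)
    d3, d4 = _ccw(p1, p2, p3), _ccw(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def ends_with_pigtail(stroke, window=20):
    """True if the last `window` points of the stroke cross themselves,
    i.e. the pen drew a small closing loop."""
    pts = stroke[-window:]
    for i in range(len(pts) - 1):
        for j in range(i + 2, len(pts) - 1):  # skip adjacent segments
            if segments_cross(pts[i], pts[i + 1], pts[j], pts[j + 1]):
                return True
    return False
```

Because the loop is detected from the stroke geometry alone, the same pen-down gesture can carry both the lasso scope and the subsequent mark, with no button, dwell, or mode switch.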