Most work on the usability of touchscreen interaction for people with motor impairments has focused on lab studies with relatively few participants and small cross-sections of the population. To develop a richer characterization of use, we turned to a previously untapped source of data: YouTube videos. We collected and analyzed 187 noncommercial videos uploaded to YouTube that depicted a person with a physical disability interacting with a mainstream mobile touchscreen device. We coded the videos along a range of dimensions to characterize the interaction, the challenges encountered, and the adaptations being adopted in daily use. To complement the video data, we also invited the video uploaders to complete a survey on their ongoing use of touchscreen technology. Our findings show that, while many people with motor impairments find these devices empowering, accessibility issues still exist. In addition to providing implications for more accessible touchscreen design, we reflect on the application of user-generated content to the study of user interface design.
As mobile devices like the iPad and iPhone become increasingly commonplace, touchscreen interactions are quickly overtaking other interaction methods in terms of frequency and experience for many users. However, most of these devices have been designed for the general, typical user. Trends indicate that children are using these devices (either their parents' or their own) for entertainment or learning activities. Previous work has found key differences in how children use touch and surface gesture interaction modalities compared to adults. In this paper, we specifically examine the impact of these differences on automatically and reliably understanding what children intended to do. We present a study of children and adults performing touch and surface gesture interaction tasks on mobile devices. We identify challenges related to (a) intentional and unintentional touches outside of onscreen targets and (b) recognition of drawn gestures, both of which indicate a need to design interaction tailored to children in order to accommodate and overcome these challenges.
Current standard interfaces for entering mathematical equations on computers are arguably limited and cumbersome. Mathematics notations have evolved to aid visual thinking, and yet text-based interfaces relying on keyboard-and-mouse input do not take advantage of the natural two-dimensional aspects of math. Due to its similarities to paper-based mathematics, pen-based handwriting input may be faster, more efficient, and preferable for entering mathematics on computers. This paper presents an empirical study that tests this hypothesis. We also explored a multimodal input method combining handwriting and speech because we hypothesize that it may enhance computer recognition and aid user cognition. Novice users were indeed faster, more efficient, and enjoyed the handwriting modality more than a standard keyboard-and-mouse mathematics interface, especially as equation length and complexity increased. The multimodal handwriting-plus-speech method was faster and better liked than the keyboard-and-mouse method and was not much worse than handwriting alone.
We present a technique that classifies users' age group, i.e., child or adult, from touch coordinates captured on touch-screen devices. Our technique delivered 86.5% accuracy (user-independent) on a dataset of 119 participants (89 children ages 3 to 6) when classifying each touch event one at a time, and up to 99% accuracy when using a window of 7+ consecutive touches. Our results establish that it is possible to reliably classify a smartphone user on the fly as a child or an adult with high accuracy using only basic data about their touches, and will inform new, automatically adaptive interfaces for touch-screen devices.
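The aggregation scheme this abstract describes, raising per-touch accuracy by pooling a window of consecutive touches, can be sketched as majority voting over per-touch predictions. A minimal illustration follows; the feature set and the per-touch classifier here are placeholder assumptions, not the trained model from the study.

```python
# Sketch of windowed age-group classification via majority vote.
# The per-touch classifier below is a hypothetical stand-in for a
# model trained on basic touch data, as described in the abstract.
from collections import Counter

def classify_touch(touch):
    # Placeholder decision rule over illustrative features
    # (x, y, pressure); a real system would use a trained model.
    x, y, pressure = touch
    return "child" if pressure > 0.5 else "adult"

def classify_window(touches, window=7):
    # Pool the most recent `window` per-touch predictions by
    # majority vote, which is how consecutive touches can push
    # accuracy well above the single-touch rate.
    preds = [classify_touch(t) for t in touches[-window:]]
    return Counter(preds).most_common(1)[0][0]
```

Even with a noisy per-touch classifier, a majority vote over seven touches suppresses individual misclassifications, which is consistent with the jump from 86.5% to 99% reported above.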
Surface gesture interaction styles used on modern mobile touchscreen devices are often dependent on the platform and application. Some applications show a visual trace of gesture input as it is made by the user, whereas others do not. Little work has been done examining the usability of visual feedback for surface gestures, especially for children. In this paper, we present results from an empirical study conducted with children, teens, and adults to explore characteristics of gesture interaction with and without visual feedback. We find that the gestures generated with and without visual feedback by users of different ages diverge significantly in ways that make them difficult to interpret. In addition, users prefer to see visual feedback. Based on these findings, we present several design recommendations for new surface gesture interfaces for children, teens, and adults on mobile touchscreen devices. In general, we recommend providing visual feedback, especially for children, wherever possible.