The ability to use eye movements to write is extremely important for individuals with severe motor disabilities. With eye typing, a virtual keyboard is shown on the screen and the user enters text by gazing at the intended keys one at a time. With dwell-based eye typing, a key is selected by continuously gazing at it for a specific amount of time. However, this approach has two possible drawbacks: unwanted selections and slow typing rates. In this study, we propose Filteryedping, a dwell-free eye typing technique that filters out unintentionally selected letters from the sequence of letters looked at by the user. It ranks possible words based on their length and frequency of use and suggests them to the user. We evaluated Filteryedping with a series of experiments. First, we recruited participants without disabilities to compare it with another potential dwell-free technique and with a dwell-based eye typing interface. The results indicate that it is a fast technique, allowing an average of 15.95 words per minute after 100 minutes of typing. Then, we improved the technique through iterative design and evaluation with individuals who have severe motor disabilities. This phase helped to identify and create parameters that allow the technique to be adapted to different users.
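The filtering-and-ranking idea described above can be sketched as follows: keep only dictionary words whose letters appear in order within the gazed letter sequence, then rank longer and more frequent words first. This is a minimal illustrative sketch; the lexicon, frequencies, and function names are assumptions, not the paper's actual implementation.

```python
def is_subsequence(word, gazed):
    """Return True if the letters of `word` appear in order within `gazed`."""
    it = iter(gazed)
    return all(ch in it for ch in word)

def rank_candidates(gazed, lexicon):
    """Keep lexicon words that are subsequences of the gazed letter sequence,
    then rank longer words first and break ties by frequency of use."""
    hits = [(w, f) for w, f in lexicon.items() if is_subsequence(w, gazed)]
    return [w for w, _ in sorted(hits, key=lambda wf: (-len(wf[0]), -wf[1]))]

# Example: the user intends "ping" but glances over extra letters on the way.
lexicon = {"ping": 120, "pin": 300, "pig": 200, "in": 900}
print(rank_candidates("poijng", lexicon))  # ['ping', 'pin', 'pig', 'in']
```

Ranking by length before frequency rewards words that account for more of the gaze sequence, which helps suppress short accidental matches.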
The development of interactive digital TV applications is hindered by the limited user-interaction options of traditional remote controls. In this work, we describe the model of a software component that allows text entry in interactive TV applications through an interface with multiple input modes: the component offers a virtual keyboard mode, a cell keypad mode, and a speech mode. We discuss our considerations with respect to the design, development, and evaluation of a prototype corresponding to our model, built according to the user-centered design methodology. After surveying existing text input methods in television systems, we interviewed four experts in the interactive TV domain. We also administered questionnaires to 153 TV users, with the aim of building a profile of users who employ text entry mechanisms. During the development of the prototype, we conducted usability tests using the think-aloud protocol, and usability inspections using the heuristic evaluation and cognitive walkthrough techniques. The evaluations revealed both a number of problems and several improvement opportunities, and highlighted the importance of offering complementary text input modes in order to satisfy the needs of different users. Overall, the evaluation results suggest that the proposed approach provides a satisfactory level of usability by overcoming the limitations of text input in user interaction with interactive TV applications.
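The multi-mode component described above can be sketched as a strategy pattern in which each input mode translates raw input into text and the application switches among modes. This is an illustrative sketch only; the class and method names are assumptions, the speech mode is omitted for brevity, and the keypad mapping shown is the familiar multi-tap digit-to-letter scheme, not necessarily the prototype's.

```python
class TextEntryMode:
    def enter_text(self, raw_input: str) -> str:
        raise NotImplementedError

class VirtualKeyboardMode(TextEntryMode):
    def enter_text(self, raw_input: str) -> str:
        return raw_input  # keys selected directly on an on-screen keyboard

class CellKeypadMode(TextEntryMode):
    # Multi-tap mapping from remote-control digits to letters.
    KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
              "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

    def enter_text(self, raw_input: str) -> str:
        # Each space-separated group is one multi-tap letter, e.g. "44" -> "h".
        out = []
        for group in raw_input.split():
            letters = self.KEYPAD.get(group[0], "")
            if letters:
                out.append(letters[(len(group) - 1) % len(letters)])
        return "".join(out)

class TextEntryComponent:
    """Lets a TV application switch among the available input modes."""
    def __init__(self):
        self.modes = {"keyboard": VirtualKeyboardMode(),
                      "keypad": CellKeypadMode()}
        self.active = self.modes["keyboard"]

    def switch(self, name: str):
        self.active = self.modes[name]

    def enter(self, raw_input: str) -> str:
        return self.active.enter_text(raw_input)

component = TextEntryComponent()
component.switch("keypad")
print(component.enter("44 33 555 555 666"))  # hello
```

Keeping each mode behind a common interface is what lets complementary input methods coexist and be swapped to match different users' needs.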
In most current digital TV applications, user interaction takes place by pressing keys on a remote control. For simple applications this type of interaction is sufficient; however, as interactive applications become more popular, new input devices are demanded. After discussing motivating scenarios, this paper presents an architecture that allows applications running on a set-top box to receive multimodal data (audio, video, image, ink, accelerometer, text, voice, and customized data) from multiple devices (such as mobile phones, PDAs, tablet PCs, notebooks, or even desktops). We validated the architecture by implementing a corresponding multimodal interaction component that extends the Brazilian Digital TV middleware, and by building applications that use the component.
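One way to picture the kind of event such a component could deliver to set-top-box applications is a tagged message carrying the source device, the modality, and an opaque payload, dispatched to whichever handler an application registered. All names below are illustrative assumptions, not the actual middleware API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
import time

class Modality(Enum):
    # The modalities listed in the abstract.
    AUDIO = auto(); VIDEO = auto(); IMAGE = auto(); INK = auto()
    ACCELEROMETER = auto(); TEXT = auto(); VOICE = auto(); CUSTOM = auto()

@dataclass
class MultimodalEvent:
    device_id: str          # e.g. a phone or tablet paired with the TV
    modality: Modality
    payload: bytes          # modality-specific data, left opaque here
    timestamp: float = field(default_factory=time.time)

def dispatch(event, handlers):
    """Route an incoming event to the handler registered for its modality."""
    handler = handlers.get(event.modality)
    if handler:
        handler(event)

# Example: an application registers a handler for text sent from a phone.
received = []
handlers = {Modality.TEXT: lambda e: received.append(e.payload.decode())}
dispatch(MultimodalEvent("phone-1", Modality.TEXT, b"hello"), handlers)
print(received)  # ['hello']
```

Tagging events by modality lets a single component fan data from many heterogeneous devices out to applications that only care about some of the modalities.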
The growing ease of capturing and playing back videos on mobile devices demands the investigation of alternatives for improving the user experience, for instance by taking advantage of the expanding culture of end-user content generation. Advances in end-user media capture and media combination aim at enriching and facilitating the authoring experience. This work explores the generation of textual annotations on videos played on mobile devices: the approach is to offer an application that allows associating annotations with a navigation line decorated with frames that are representative of the points of interest. The aim is to improve the user experience of video annotation by allowing users to find interesting points intuitively. Experiments with users identified new issues, even though the application was considered easy to use.
Interactive tabletops offer a unique opportunity for exploring home videos and photos. Nevertheless, there are still a number of unexplored challenges in effectively supporting collocated group interaction around media. This paper reports on a user study involving 24 users, intended to better understand the challenges ahead. Our volunteers (in couples) evaluated our media sharing application prototype, providing valuable feedback with regard to three key challenges: metaphor, digital ecosystem, and level of control. First, users appreciated the selected metaphor of physical photos, but without relinquishing software support, such as alignment and distribution of media items. Second, vertical auxiliary screens helped support a larger number of users and provided more comfort and a better viewing angle and stance. Third, the nature of the task (either storytelling or random exploration) had a strong influence on the control capabilities to be provided by the application. Fourth, personal spaces within the tabletop were useful for allowing independent navigation. We consider these results relevant for the future development of home media sharing applications for the living room.