Digital media content can include items that are deeply personal and valuable to their owner. Such items can form life memories, such as media collages from happy events or recordings of one's children's first steps. Memories can be evoked and created "anytime, anyplace", so mobility is a key factor in managing them. Although systems for sharing photographs exist, users' needs for managing personal content have not been investigated specifically from the viewpoint of life memories. This paper describes our empirical research on users' needs for sharing the digital representations of their life memories. As the main contribution, we present design guidelines for services for sharing digital life memories. Furthermore, we present a mobile service prototype designed according to these guidelines. Our research shows that the creation, sharing, management, and viewing of digital life memories are strongly grounded in meaningful real-life events.
The home environment is an exciting application domain for multimodal mobile interfaces. Instead of multiple remote controls, personal mobile devices could be used to operate home entertainment systems. This paper reports a subjective evaluation of multimodal inputs and outputs for controlling a home media center with a mobile phone. A within-subject evaluation with 26 participants revealed significant differences in user expectations of, and experiences with, the different modalities. Speech input was received extremely well, even surpassing expectations in some cases, while gestures and haptic feedback barely met the lowest expectations. The results can be applied to the design of similar multimodal applications in home environments.
We present a multimodal media center interface designed for blind and partially sighted people. It features a zooming focus-plus-context graphical user interface coupled with speech output and haptic feedback. A multimodal combination of gestures, key input, and speech input is utilized to interact with the interface. The interface has been developed and evaluated in close cooperation with representatives from the target user groups. We discuss the results from longitudinal evaluations that took place in participants' homes, and compare the results to other pilot and laboratory studies carried out previously with physically disabled and nondisabled users.
Abstract. We present a multimodal media center interface based on speech input, gestures, and haptic feedback (hapticons). In addition, the application includes a zoomable focus-plus-context GUI tightly coupled with speech output. The resulting interface was designed for and evaluated with different user groups, including visually and physically impaired users. Finally, we present the key results from its user evaluation and public pilot studies.