Crowdsourced mobile microtasking represents a significant opportunity in emerging economies such as India, which are characterized by high levels of mobile phone penetration and large numbers of educated people who are unemployed or underemployed. Indeed, mobile phones have been used successfully in many parts of the world for microtasking, primarily for crowdsourced data collection and text- or image-based tasks. More complex tasks, such as the annotation of multimedia like audio or video, have traditionally been confined to desktop interfaces. With the rapid evolution in the multimedia capabilities of mobile phones in these geographies, we believe that both the nature of microtasks carried out on these devices and the design of interfaces for such microtasks warrant investigation. In this paper we explore the design of mobile phone interfaces for a set of multimedia-based microtasks on feature phones, which represent the vast majority of multimedia-capable mobile phones in these geographies. As part of an initial study using paper prototypes, we evaluate three types of multimedia content (images, audio, and video) and three data-input interfaces: Direct Entry, Scroll Key Input, and Key Mapping. We observe that while there are clear interface preferences for image and audio tasks, user preference for video tasks varies with 'task complexity', that is, the 'density' of data the annotator has to deal with. In a second study, we prototype two different interfaces for video-based annotation tasks: a single-screen input method and a two-screen phased interface. We evaluate the two interface designs and the three data input methods studied earlier by means of a user study with 32 participants. Our findings show that for less dense data, participants prefer Key Mapping as the input technique. For dense data, while participants still prefer Key Mapping, our data shows that the accuracy of data input with Key Mapping is significantly lower than with Scroll Key Input. The study also provides insight into the strategy each user develops and employs to input data. We believe these findings will enable other researchers to build effective user interfaces for mobile microtasks, and will be of value to UI developers, HCI researchers, and microtask designers.
Most work in the space of multimodal and gestural interaction has focused on single-user productivity tasks. The design of multimodal, freehand gestural interaction for multi-user, lean-back scenarios is a relatively nascent area that has come into focus with the availability of commodity depth cameras. In this paper, we describe our approach to designing multimodal gestural interaction for multi-user photo browsing in the living room, typically a shared experience with friends and family. We believe that the lessons from this process will add value to the efforts of other researchers and designers interested in this design space.
Distance education (DE) today is mostly in broadcast mode, where classes are transmitted over networks for students to consume. While this instruction mode stays close to the physical-classroom metaphor, what is often lacking is the rich two-way interaction that occurs between students and the teacher, and among the students themselves, in a physical classroom. In this paper, we describe the design of DESI, a virtual classroom system for DE. We consider the scenario wherein a student attends a class at home on a PC or an internet-connected TV, and we explore student and teacher interfaces that promote classroom interaction and integrate multimodal interactions to enable a richer, more interactive virtual classroom experience. We briefly describe the software architecture of the DESI system and present preliminary results from testing an early version of the system with end users. Our work is relevant to distance education on TV broadcast networks, online classrooms, and enterprise collaboration and e-learning systems.