The key prerequisite for experience-driven design is to define what experience to design for. User experience (UX) goals concretise the intended experience. Based on our own case studies from industrial environments and a literature study, we propose five approaches to acquiring insight and inspiration for UX goal setting: Brand, Theory, Empathy, Technology, and Vision. Each approach brings in a different viewpoint, thus supporting the multidisciplinary character of UX. The Brand approach ensures that the UX goals are in line with the company's brand promise. The Theory approach utilises the available scientific knowledge of human behaviour. The Empathy approach focuses on knowing the actual users and stepping into their shoes. The Technology approach considers the new technologies that are being introduced and their positive or negative influence on UX. Finally, the Vision approach focuses on renewal, introducing new kinds of user experiences. The design of industrial systems involves several stakeholders, who should share common design goals. Using the different UX goal-setting approaches together brings in the viewpoints of the different stakeholders, thus committing them to UX goal setting and emphasising UX as a strategic design decision.
The COVID-19 pandemic has affected the entire world in many ways. It has sparked a prominent pedagogical shift for university-level students, as it has changed the way students learn, attend classes, and communicate with teachers. Globally, students were forced to adopt Emergency Remote Learning (ERL) as a result of the immediate transformation of physical classes into remote education. This two-fold study investigated the differences between traditional distance, online, and virtual learning solutions and the new ERL method for university-level education. Furthermore, a pragmatic mixed-methods study was conducted in the form of surveys, semi-structured interviews, and a diary study spanning 10 months of the pandemic, to examine self-reported insights on the ERL challenges, experiences, and learning engagement of students from Finland and India. Cumulative findings suggest that scheduling, distractions, pessimistic emotions, longer durations, and concentration were the greatest challenges faced by the students, and that these impacted their learning experiences and engagement. The study also found that ERL-specific factors such as low interactivity, technical limitations, and non-structured, non-standardized methods had a prominent impact on the effectiveness of remote education. Finally, the study suggests guidelines for improving the remote learning experience as a solution extending beyond the COVID-19 pandemic.
Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper, we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. We present concrete system architectures; virtual, physical, and mobile multimodal interfaces; and interaction management techniques for such Companions. In particular, we present how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, but still coherent, interaction with Companions. The distribution is enabled by using a dialogue plan to communicate information from the domain-level planner to dialogue management, and from there to a separate mobile interface. The model enables each part of the system to handle the same information from its own perspective without containing overlapping logic, and makes it possible to separate task-specific and conversational dialogue management from each other. In addition to the technical descriptions, we present results from the first evaluations of the Companions interfaces.
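The layered architecture described above — a domain-level planner deciding *what* to communicate, a dialogue manager deciding *how* to phrase it, and a separate mobile interface rendering the same plan — can be sketched as follows. All class and field names here are hypothetical illustrations of the dialogue-plan hand-off, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DialoguePlanStep:
    """One step of a dialogue plan produced at the domain level."""
    concept: str   # domain-level concept, e.g. "progress" (hypothetical name)
    value: object  # payload to communicate to the user

@dataclass
class DialoguePlan:
    steps: list = field(default_factory=list)

class DomainPlanner:
    """High-level reasoning: decides WHAT to communicate."""
    def plan(self, user_steps: int, goal: int) -> DialoguePlan:
        plan = DialoguePlan()
        plan.steps.append(DialoguePlanStep("progress", user_steps))
        if user_steps < goal:
            plan.steps.append(DialoguePlanStep("encouragement", goal - user_steps))
        return plan

class DialogueManager:
    """Low-level interaction modelling: decides HOW to phrase each step."""
    def realise(self, plan: DialoguePlan) -> list:
        utterances = []
        for step in plan.steps:
            if step.concept == "progress":
                utterances.append(f"You have walked {step.value} steps today.")
            elif step.concept == "encouragement":
                utterances.append(f"Only {step.value} steps left to your goal!")
        return utterances

class MobileInterface:
    """Separate mobile interface: renders the same plan on a handheld device."""
    def render(self, plan: DialoguePlan) -> dict:
        return {step.concept: step.value for step in plan.steps}
```

Because the plan carries only domain concepts, the dialogue manager and the mobile interface each interpret it from their own perspective without duplicating the planner's logic, which is the distribution property the abstract describes.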
Abstract—Mid-air gestures have been largely overlooked for transferring content between large displays and personal mobile devices. To fully utilize the ubiquitous nature of mid-air gestures for this purpose, we developed SimSense, a smart space system which automatically pairs users with their mobile devices based on location data. Users can then interact with a gesture-controlled large display and move content onto their handheld devices. We investigated two mid-air gestures for content transfer, grab-and-pull and grab-and-drop, in a user study. Our results show that i) mid-air gestures are well suited for content retrieval scenarios and offer an impressive user experience, ii) grab-and-pull is preferred for scenarios where content is transferred to the user, whereas grab-and-drop is presumably ideal when the recipient is another person or a device, and iii) distinct gestures can be successfully combined with common point-and-dwell mechanics prominent in many gesture-controlled applications.
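The combination of point-and-dwell selection with grab-based transfer gestures can be illustrated as a small state machine. This is a minimal sketch under assumed event names ("point", "grab", "pull", "drop") and an assumed dwell threshold; it is not the SimSense implementation:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()      # no item selected
    SELECTED = auto()  # point-and-dwell completed on an item
    GRABBED = auto()   # item grabbed, awaiting pull or drop

class GestureRecognizer:
    """Combines point-and-dwell selection with grab-and-pull / grab-and-drop."""
    DWELL_FRAMES = 30  # assumed threshold: ~1 second of pointing at 30 fps

    def __init__(self):
        self.state = State.IDLE
        self.dwell = 0
        self.item = None

    def update(self, event: str, target=None):
        """Feed one tracker event; returns a transfer action tuple or None."""
        if self.state == State.IDLE:
            if event == "point":
                self.dwell += 1
                if self.dwell >= self.DWELL_FRAMES:
                    self.item = target          # dwell completed: select item
                    self.state = State.SELECTED
            else:
                self.dwell = 0                  # pointing interrupted: reset
        elif self.state == State.SELECTED:
            if event == "grab":
                self.state = State.GRABBED
        elif self.state == State.GRABBED:
            if event == "pull":
                self.state, self.dwell = State.IDLE, 0
                return ("pull_to_own_device", self.item)   # grab-and-pull
            if event == "drop":
                self.state, self.dwell = State.IDLE, 0
                return ("drop_onto", self.item, target)    # grab-and-drop
        return None
```

The design choice mirrors finding iii): dwell handles selection (as in many gesture-controlled applications), while the distinct grab gestures handle the transfer, so the two mechanics never compete for the same hand posture.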