As statistical machine learning algorithms and techniques continue to mature, many researchers and developers see statistical machine learning not only as a topic of expert study, but also as a tool for software development. Extensive prior work has studied software development, but little prior work has studied software developers applying statistical machine learning. This paper presents interviews of eleven researchers experienced in applying statistical machine learning algorithms and techniques to human-computer interaction problems, as well as a study of ten participants working during a five-hour study to apply statistical machine learning algorithms and techniques to a realistic problem. We distill three related categories of difficulties that arise in applying statistical machine learning as a tool for software development: (1) difficulty pursuing statistical machine learning as an iterative and exploratory process, (2) difficulty understanding relationships between data and the behavior of statistical machine learning algorithms, and (3) difficulty evaluating the performance of statistical machine learning algorithms and techniques in the context of applications. This paper provides important new insight into these difficulties and the need for development tools that better support the application of statistical machine learning.
Virtual reality (VR) offers new possibilities for learning, specifically for training individuals to perform physical movements such as physical therapy and exercise. The current article examines two aspects of VR that uniquely contribute to media interactivity: the ability to capture and review physical behavior and the ability to see one's avatar rendered in real time from third-person points of view. In two studies, we utilized a state-of-the-art, image-based tele-immersive system, capable of tracking and rendering many degrees of freedom of human motion in real time. In Experiment 1, participants learned better in VR than in a video learning condition according to self-report measures, and the cause of the advantage was seeing one's avatar stereoscopically in the third person. In Experiment 2, we added a virtual mirror in the learning environment to further leverage the ability to see oneself from novel angles in real time. Participants learned better in VR than in video according to objective performance measures. Implications for learning via interactive digital media are discussed.

Historically, virtual reality (VR) learning environments have been applied to a multitude of learning scenarios, from flight simulation (Hays, Jacobs, Prince, & Salas, 1992) to medical training (Berkley, Turkiyyah, Berg, Ganter, & Weghorst, 2004) to classroom learning (Pantelidis, 1993). One of the most exciting aspects of VR is its ability to leverage interactivity. Virtual systems offer a novel, flexible environment with affordances not possible in previous media such as video and text (Blascovich et al., 2002). These virtual environments offer unique opportunities for on-demand learning (Trondsen & Vickery, 1997), customization and personalization (Kalyanaraman & Sundar, 2006), and feedback mechanisms (Lee & Nass, 2005).
Previous research has shown that on-demand learning provides an advantage over face-to-face human interaction (Trondsen & Vickery, 1997). In a variety of contexts, VR offers possibilities to extend the notion of interactive learning in ways not possible through face-to-face interaction (see Bailenson et al., 2008, for a review of research on learning in VR). The current studies measured the effects of learning physical tasks from a virtual system compared to video, leveraging features such as three-dimensional depth cues, representations of the participant next to the instructor, and changes of scene angle not possible through traditional video representations.

INTERACTIVITY IN MEDIA

As Sundar and Nass (2000) point out, digital technology has drastically changed the way in which communication occurs; audiences, typically regarded as passive receivers, have now become more active in their media experience, often being referred to as "users." Conceptual definitions of interactivity typically emphasize three dimensions: technology, process, and user. Proponents of the technology dimension argue that interactivity is an affordance of technology (Steuer, 1...
Conversations are characterized by an interactional synchrony between verbal and nonverbal behaviors (Kendon, 1970). A subset of these contingent conversational behaviors is direct mimicry. During face-to-face interaction, people often mimic the verbal behavior of their partners (Giles, Coupland, & Coupland, 1991). Most research examining mimicry behavior in interaction examines "implicit mimicry," in which the mimicked individual is unaware of the behavior of the mimicker. In this paper, we examined how effective people were at explicitly detecting mimicking computer agents and the consequences of mimic detection in terms of social influence and interactional synchrony. In Experiment 1, participant pairs engaged in a "one-degree-of-freedom" Turing Test. When the computer agent mimicked them, users were significantly worse than chance at identifying the other human. In Experiment 2, participants were more likely to detect mimicry in an agent that mirror-mimicked their head movements (three degrees of freedom) than in agents that either congruently mimicked their behaviors or mimicked those movements on another rotational axis. We discuss implications for theories of interactivity.
We have limited understanding of how older adults use smartphones, how their usage differs from that of younger users, and the causes of those differences. As a result, researchers and developers may miss promising opportunities to support older adults or offer solutions to unimportant problems. To characterize smartphone usage among older adults, we collected iPhone usage data from 84 healthy older adults over three months. We find that older adults use fewer apps, take longer to complete tasks, and send fewer messages. We use cognitive test results from these same older adults to then show that up to 79% of these differences can be explained by cognitive decline, and that we can predict cognitive test performance from smartphone usage with 83% ROC AUC. While older adults differ from younger adults in app usage behavior, the "cognitively young" older adults use smartphones much like their younger counterparts. Our study suggests that to better support all older adults, researchers and developers should consider the full spectrum of cognitive function.
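The ROC AUC figure mentioned above measures how well a classifier's scores rank positive cases ahead of negative ones (0.5 is chance, 1.0 is perfect). The sketch below illustrates the metric on synthetic data; the features, labels, and model are hypothetical stand-ins, not the study's actual pipeline.

```python
# Minimal sketch of ROC AUC evaluation for a binary prediction task,
# in the spirit of predicting cognitive test performance from usage data.
# All data here is synthetic; feature meanings are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# Hypothetical usage features, e.g. apps/day, mean task time, messages/day.
X = rng.normal(size=(n, 3))
# Hypothetical label: 1 = below-median cognitive test score,
# constructed so it depends on the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
# ROC AUC is computed from predicted probabilities, not hard labels.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.2f}")
```

Because ROC AUC is rank-based, it is insensitive to the classification threshold, which makes it a common choice when the downstream cutoff (here, flagging possible cognitive decline) is not fixed in advance.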