Social comparison-based features are widely used in social computing apps. However, most existing apps are not grounded in social comparison theories and do not consider individual differences in social comparison preferences and reactions. This paper is among the first to automatically personalize social comparison targets. In the context of an m-health app for physical activity, we use multi-armed bandits, an artificial intelligence (AI) technique. Results from our user study (n=53) provide some evidence that motivation can be increased through AI-based personalization of social comparison. The detected effects achieved small-to-moderate effect sizes, illustrating the real-world implications of the intervention for enhancing motivation and physical activity. In addition to design implications for social comparison features in social apps, we identify the personalization paradox, the conflict between user modeling and adaptation, as a key design challenge of personalized applications for behavior change. Additionally, we propose research directions to mitigate this personalization paradox.
The advent of artificial intelligence (AI) and machine learning (ML) brings human-AI interaction to the forefront of HCI research. This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI. Through a systematic
Reflection is a critical aspect of the learning process. However, educational games tend to focus on teaching concepts rather than supporting reflection. While reflection does occur in educational games, the educational game design and research community can benefit from more knowledge of how to facilitate player reflection through game design. In this paper, we examine educational programming games and analyze how reflection is currently supported. We find that current approaches prioritize accuracy over the individual learning process and often support reflection only after gameplay. Our analysis identifies common reflective features, and we develop a set of open areas for future work. We discuss these promising directions with the aim of engaging the community in developing more mechanics for reflection in educational games.
Understanding players' mental models is crucial for game designers who wish to successfully integrate player-AI interactions into their games. However, game designers face the difficult challenge of anticipating how players model these AI agents during gameplay and how they may change their mental models with experience. In this work, we conduct a qualitative study to examine how pairs of players develop mental models of an adversarial AI player during gameplay in the multiplayer drawing game iNNk. We conducted ten gameplay sessions in which two players (n = 20, 10 pairs) worked together to defeat an AI player. Through our analysis, we uncovered two dominant dimensions that describe players' mental model development: focus and style. The first dimension, focus, refers to what players pay attention to when developing their mental model (i.e., top-down vs. bottom-up focus). The second dimension, style, refers to how players integrate new information into their mental model (i.e., systematic vs. reactive style). In our preliminary framework, we further note how players process a change when a discrepancy occurs, which we observed happening through comparisons (i.e., comparing to other systems, comparing to gameplay, comparing to self). We offer these results as a preliminary framework for player mental model development to help game designers anticipate how different players may model adversarial AI players during gameplay.
Designing human-centered AI-driven applications requires a deep understanding of how people develop mental models of AI. Currently, we have little knowledge of this process and limited tools to study it. This paper presents the position that AI-based games, particularly the player-AI interaction component, offer an ideal domain for studying how mental models evolve. We present a case study to illustrate the benefits of our approach for explainable AI.