Figure 1: Scatter plots showing the relationship between pass rate (a measure of level difficulty) and churn rate across 168 game levels of Angry Birds Dream Blast, for both real player data and our simulations. Here, churn is defined as not playing for 7 days. Colors denote level numbers. The baseline simulation model predicts pass and churn rates directly from AI gameplay; our proposed extended model augments this with a simulation of how the player population evolves over the levels.
This paper presents a review of intrinsic motivation in player modeling, with a focus on simulation-based game testing. Modern AI agents can learn to win many games; from a game testing perspective, a remaining research problem is how to model the aspects of human player behavior not explained by purely rational and goal-driven decision making. Intrinsic motivations, i.e., psychological needs that drive behavior without extrinsic reinforcement such as game score, constitute a major piece of this puzzle. We first review the common intrinsic motivations discussed in player psychology research and artificial intelligence, and then systematically review how these motivations have been implemented in simulated player agents. Our work reveals that although motivations such as competence and curiosity have been studied in AI, work on utilizing them in simulation-based game testing is sparse, and other motivations such as social relatedness, immersion, and domination appear particularly underexplored.
This paper presents a novel approach to automated playtesting for predicting human player behavior and experience. We have previously demonstrated that Deep Reinforcement Learning (DRL) game-playing agents can predict both game difficulty and player engagement, operationalized as average pass and churn rates. We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS). We also motivate an enhanced selection strategy for predictor features, based on the observation that an AI agent's best-case performance can yield stronger correlations with human data than the agent's average performance. Both additions consistently improve prediction accuracy, and the DRL-enhanced MCTS outperforms both DRL and vanilla MCTS on the hardest levels. We conclude that player modeling via automated playtesting can benefit from combining DRL and MCTS. Moreover, when AI gameplay does not yield good predictions on average, it can be worthwhile to examine a subset of the best runs among repeated AI playthroughs.
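The feature-selection idea above, extracting a best-case statistic alongside the average over repeated agent runs, can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual implementation: `run_agent` stands in for a real game-playing agent, and the difficulty-to-score mapping is an assumption.

```python
import random

random.seed(0)

def run_agent(level_difficulty: float) -> float:
    """One simulated playthrough: a noisy score that is lower on
    harder levels (synthetic stand-in for real agent gameplay)."""
    return max(0.0, 1.0 - level_difficulty + random.gauss(0.0, 0.2))

def level_features(difficulty: float, n_runs: int = 20) -> tuple[float, float]:
    """Run the agent repeatedly on one level and return two candidate
    predictor features: average score and best-case (maximum) score."""
    scores = [run_agent(difficulty) for _ in range(n_runs)]
    return sum(scores) / n_runs, max(scores)

# Example: features for an easy and a hard level
easy_avg, easy_best = level_features(0.2)
hard_avg, hard_best = level_features(0.8)
```

Either feature could then be correlated with human pass or churn rates per level; the abstract's observation is that the best-case feature can correlate more strongly when average agent performance is a poor predictor.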