2022
DOI: 10.1145/3549482
"I Want To See How Smart This AI Really Is": Player Mental Model Development of an Adversarial AI Player

Abstract: Understanding players' mental models is crucial for game designers who wish to successfully integrate player-AI interactions into their game. However, game designers face the difficult challenge of anticipating how players model these AI agents during gameplay and how they may change their mental models with experience. In this work, we conduct a qualitative study to examine how a pair of players develop mental models of an adversarial AI player during gameplay in the multiplayer drawing game iNNk. We conduct…

Cited by 7 publications (3 citation statements)
References 49 publications
“…For example, researchers can investigate the use of thought bubbles for diegetic elicitation through non-verbal queries such as card sorting [63] or diagrammatic representations [77]. In addition, recent advances in AI have inspired many researchers to study human-AI interaction [1,86] and investigate mental models of AI [40,92]. Thought bubbles can be utilized to advance our understanding of users' mental model development of AI over time.…”
Section: Discussion (mentioning)
confidence: 99%
“…For instance, previous ethnographic studies in HCI have employed conversation analysis to reveal how errors of virtual assistants such as Alexa become a source of humor when they "fail" during family dinner time conversations [72]. Game designers have engaged human players to detect ML errors as a form of playful experience [89,98]. More broadly, researchers have found that exposing ML errors to users is helpful for them to develop accurate mental models of the system [38,89].…”
Section: Dependability (mentioning)
confidence: 99%
“…Game designers have engaged human players to detect ML errors as a form of playful experience [89,98]. More broadly, researchers have found that exposing ML errors to users is helpful for them to develop accurate mental models of the system [38,89]. Therefore, designers should use these AI artworks as a starting point to broaden the design space of how ML errors are exposed to users and how the ML system can recover from them.…”
Section: Dependability (mentioning)
confidence: 99%