2020
DOI: 10.3389/fpsyg.2020.561510

Personalizing Human-Agent Interaction Through Cognitive Models

Abstract: Cognitive modeling of human behavior has advanced the understanding of underlying processes in several domains of psychology and cognitive science. In this article, we outline how we expect cognitive modeling to improve comprehension of individual cognitive processes in human-agent interaction and, particularly, human-robot interaction (HRI). We argue that cognitive models offer advantages compared to data-analytical models, specifically for research questions with expressed interest in theories of cognitive f…

Cited by 16 publications (6 citation statements)
References 53 publications
“…The usefulness of the specific modeling approach used here is probably specific to binary decision-making scenarios that typically occur in highly structured environments like roads. Alternative approaches to cognitive modeling of human behavior in human-agent interaction exist that are less situation-specific (Thomaz et al., 2016; Hiatt et al., 2017), although integration of cognitive models into computational frameworks for interaction planning remains an open problem (Ho and Griffiths, 2022; Schürmann and Beckerle, 2020). Yet, we believe research of this kind will be instrumental in enabling agents to have appropriate representations of humans around them, which is critical for responsible development and deployment of artificial agents in the real world (Cavalcante Siebert et al., 2022).…”
Section: Discussion
confidence: 99%
“…A unified modeling framework could be a step toward predicting factors that improve embodiment of artificial limbs and could thus improve user experience. The authors of the present article propose a twofold extension of the current modeling approaches in accordance with the upper two levels of cognitive modeling proposed by Marr (1982), which earlier research has used to describe the different underlying tasks of modeling approaches (e.g., Schürmann and Beckerle, 2020; Shams and Beierholm, 2021). Firstly, starting at the computational theory level, we propose to improve the current model structure and extend the models to structurally varying bodies, taking into account individual differences in the perception of embodiment.…”
Section: Introduction
confidence: 94%
“…For AI agents to have appropriate representations of human agents, the assumptions about human intentions and behavior adopted by AI agents (either implicitly or explicitly) need to be validated. This can be aided by incorporating theoretically grounded and empirically validated models of humans into the interaction-planning algorithms of AI agents [69,70], or by augmenting bottom-up, machine-learned representations with top-down symbolic representations [71,72]. An alternative approach, value alignment [73], aims to mitigate the problems that arise when autonomous systems operate with inappropriate objectives.…”
Section: Practical Considerations
confidence: 99%