2014
DOI: 10.1016/j.cognition.2013.11.011

Inferring the intentional states of autonomous virtual agents

Cited by 28 publications (28 citation statements)
References 27 publications
“…This study differs in several ways from previous studies of the accuracy of inferences of intention (Baker, Saxe, & Tenenbaum, 2009; Pantelis et al., 2014). First, other studies used simulated agents that did not necessarily produce ecologically valid examples of intentional behavior.…”
Section: Discussion
confidence: 65%
“…Previous studies have modeled the computational mechanisms of the human mind-reading ability as a Bayesian inference (Fig. 2, left) [1,8,14]. In this paper, we collectively call these approaches the Bayesian Theory of Mind (BToM).…”
Section: Bayesian Modeling of Theory of Mind
confidence: 99%
“…Computational accounts of top–down reasoning about intentional agents in simple animated displays have modeled how the structure of agents' actions and of the situational context shape conceptually rich mental state inferences, such as attribution of goals to individual (Baker, Saxe, & Tenenbaum, ; see Fig. C) and interactive (Baker, Goodman, & Tenenbaum, ; Pantelis et al, ; Ullman et al, ) agents, or attribution of beliefs and desires (Baker, Saxe, & Tenenbaum, ). These accounts formalize the “principle of rationality” from philosophy (Dennett, ) and developmental psychology (Gergely, Nádasdy, Csibra, & Bíró, )—the assumption that intentional agents will act rationally to achieve their desires, given their beliefs about the world—in terms of probabilistic models of agents' rational belief‐, desire‐, goal‐, and context‐dependent action planning, based on accounts of rational utility‐theoretic planning from AI and economics.…”
Section: Introduction
confidence: 99%
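
The two excerpts above both refer to Bayesian Theory of Mind ("inverse planning") accounts, in which an observer infers an agent's goal by assuming the agent acts approximately rationally toward it. Below is a minimal sketch of that idea, assuming a hypothetical 2-D world, two candidate goals, and a softmax-rational step model with inverse temperature beta; all names and parameter values are illustrative choices, not taken from any of the cited papers.

```python
# Minimal sketch of Bayesian goal inference ("inverse planning").
# Hypothetical setup: an agent walks on a 2-D grid toward one of two goals;
# the observer scores each candidate goal by how well it explains the moves.

import math

# Candidate goal locations the observer considers (hypothetical).
GOALS = {"A": (5, 0), "B": (0, 5)}


def step_likelihood(pos, nxt, goal, beta=2.0):
    """Softmax-rational action model: moves that reduce the distance to the
    goal are exponentially more likely, with inverse temperature beta."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def dist_after(move):
        return math.dist((pos[0] + move[0], pos[1] + move[1]), goal)

    normalizer = sum(math.exp(-beta * dist_after(m)) for m in moves)
    chosen = (nxt[0] - pos[0], nxt[1] - pos[1])
    return math.exp(-beta * dist_after(chosen)) / normalizer


def infer_goal(trajectory, prior=None):
    """Posterior over candidate goals given an observed trajectory."""
    prior = prior or {g: 1.0 / len(GOALS) for g in GOALS}
    log_post = {g: math.log(prior[g]) for g in GOALS}
    for pos, nxt in zip(trajectory, trajectory[1:]):
        for g, loc in GOALS.items():
            log_post[g] += math.log(step_likelihood(pos, nxt, loc))
    total = sum(math.exp(v) for v in log_post.values())
    return {g: math.exp(v) / total for g, v in log_post.items()}


if __name__ == "__main__":
    # An agent moving steadily rightward should be judged as heading for "A".
    observed = [(0, 0), (1, 0), (2, 0), (3, 0)]
    print(infer_goal(observed))
```

As the excerpts note, the full BToM models add richer ingredients (beliefs, desires, situational context, and utility-theoretic planning); this sketch only illustrates the core inference step of scoring candidate intentions by the rationality of the observed actions.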