2019
DOI: 10.1080/09515089.2019.1688778

Adopting the intentional stance toward natural and artificial agents

Abstract: In our daily lives, we need to predict and understand others' behaviour in order to navigate through our social environment. Predictions concerning other humans' behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that…

Cited by 69 publications (43 citation statements)
References 74 publications (78 reference statements)
“…[57]). Humans infer the mental states of robots as they do those of humans, so long as the robots' social cues are similar [60], via "social attunement" [61]. More direct mind ascription (i.e., willful acknowledgement of agent mindedness) is distinct from, and often divergent from, preconscious mentalizing, likely because it requires elaborative processing that invokes agent-category heuristics [60].…”
Section: Contributions of Domain-Specific Moral Evaluations to Social…
confidence: 99%
“…Nevertheless, psychological and sociological investigations show that people are willing to attribute rich mental states to currently existing AI systems [1], including foreknowledge of bad outcomes [10] and intentions to deceive [11]. They are also willing to treat such systems as blameworthy [2]-[4], [12], [13].…”
Section: Introduction
confidence: 99%
“…As Malle and colleagues have shown, anthropomorphic AI systems are treated more like humans than their mechanical-looking counterparts as far as morality is concerned ([14]; see also instanceproject.eu). Perhaps people are more willing to blame anthropomorphic AI systems because looking human naturally leads people to ascribe mental traits [1], [8]. Going beyond inferences based on the robot's physical appearance, we decided to directly target the connection between the perceived capacity for inculpating mental states and moral evaluations.…”
Section: Introduction
confidence: 99%
“…However, the social-cognitive mechanisms promoting mentalizing of robots may break down under some conditions: when elaboration on the agent's behavior makes salient the machine-ontological status of the robot (Banks, 2020c), when such salience activates mental models that include expectations of mindlessness in machines (cf. Perez-Osorio & Wykowska, 2020; Thellman et al., 2020), or when a robot's social cues are uninterpretable (Banks, 2020c). Moreover, the manner in which mentalizing is inferred by researchers is of particular importance, as verbal and direct metrics (i.e., self-reports) do not always comport with nonverbal and indirect measures (i.e., behavioral indicators; Banks, 2020c; Thellman et al., 2020), likely because they are associated with explicit/logical and implicit/intuitive processes, respectively (Takahashi et al., 2013).…”
Section: Review of Literature: ToM in Human-Robot Interaction
confidence: 99%