2018
DOI: 10.31234/osf.io/sr2c8
Preprint

From automata to animate beings: The scope and limits of attributing socialness to artificial agents

Abstract: Understanding the mechanisms and consequences of attributing socialness to artificial agents has important implications for how we can use technology to lead more productive and fulfilling lives. Here, we integrate recent findings on the factors that shape behavioural and brain mechanisms that support social interactions between humans and artificial agents. We review how visual features of an agent, as well as knowledge factors within the human observer, shape attributions across dimensions of socialness. We …

Cited by 12 publications (24 citation statements)
References 150 publications (228 reference statements)
“…Alternatively, as Gray and Wegner () have argued, it is not the physical resemblance of androids to humans that causes the uneasiness; rather, the discomfort arises because such human‐looking robots are perceived as possessing human‐like qualities such as sentience and autonomy. For example, when Gray and Wegner told participants that a robot had some mind capacity (e.g., it could feel emotion), people felt more unnerved by the robot regardless of whether it had a human‐like or mechanical appearance (see also Hortensius & Cross, ). Similarly, Złotowski, Yogeeswaran, and Bartneck () found that robots depicted in a video as autonomous elicited more negative attitudes and perceptions of threat than robots that were non‐autonomous.…”
Section: Robots as a Threat (mentioning)
confidence: 99%
“…If one assumes that there can be human-machine interactions that are not reducible to mere tool use, and that such interactions can be meaningfully considered a new kind of social interaction, one must argue for expanding the notion of a social agent. By this, one takes the position that certain artificial systems can qualify as social agents if they possess both a kind of agency (minimal agency) and a form of social competence (minimal socio-cognitive abilities), such that they can both contribute to an exchange of social information and influence the outcome of a social interaction [20]. However, the claim that certain artificial agents could conceivably qualify as a new type of social agent does not yet imply that those agents automatically qualify as moral agents.…”
Section: Responsibility on the Side of Artificial Agents (mentioning)
confidence: 99%
“…But outside of Western cultural traditions, for example in Shintoism and Animism, objects that are considered inanimate from a Western perspective are characterized as animate. Furthermore, the claim that some human-machine interactions resemble social human-human interactions rather than tool use is supported by the fact that the assumption that human-machine interactions are comparable to human-human interactions has already found its way into empirical research [1]. In several studies, experimental protocols with artificial agents are used to gain insights into human socio-cognitive mechanisms [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…Robots in social perception are not only anthropomorphized; people also attribute socialness to them [58], recognizing them as social agents that act in relation to others and belong to social groups. The sense of social presence when interacting with a social robot leads to higher enjoyment and greater acceptance of the robot [59].…”
Section: Assigning Specific Features to Robots and Forming Expectations (mentioning)
confidence: 99%