2020
DOI: 10.1080/21507740.2020.1740350

Anthropomorphism in AI

Abstract: AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI's functionalities and innovations are often anthropomorphized (i.e., described and conceived as characterized by human traits). The genera…

Cited by 126 publications (77 citation statements)
References 33 publications (47 reference statements)
“…It has been theorized that the more human-like an AI agent is, the more likely humans are to trust and accept it [72]. However, there are concerns that overanthropomorphism may lead to overestimation of the AI's capabilities, potentially putting the stakeholder at risk [73], damaging trust [74], and leading to a host of ethical and psychological concerns, including manipulation [75].…”
Section: Anthropomorphism and Embodiment (mentioning, confidence: 99%)
“…Philosophers and computer scientists have repeatedly cautioned against adopting psychological language towards artificially intelligent systems, as this can lead to "premature conclusions of ethical or legal significance" [5, pp. 166-167] [5]-[7]. Differently put, since (on most views) moral agency requires the capacity for inculpating mental states, postulating the latter for AI systems might engender the mistaken inference that they can be moral (and legal) agents [8], [9].…”
Section: Introduction (mentioning, confidence: 99%)
“…A second obstacle is the phenomenon of anthropomorphizing AI systems. With the recent boom of suprahuman performance on such tasks as Atari games (Mnih et al., 2015), Go (Silver et al., 2016), and lung cancer detection (Ardila et al., 2019), we have seen a proliferation of the anthropomorphization of AI in the media (Proudfoot, 2011; Watson, 2019; Salles et al., 2020). This has been exacerbated by the ML literature itself (Lipton and Steinhardt, 2018), where many ML tasks and techniques are described using the same language we would use for a human doing the task: reading comprehension (Hermann et al., 2015), music composition (Mozer, 1994), curiosity (Schmidhuber, 1991), fear (Lipton et al., 2016), “thought” vectors (Kiros et al., 2015), and “consciousness” priors (Bengio, 2017).…”
Section: Introduction (mentioning, confidence: 99%)