2021
DOI: 10.1007/s43681-021-00060-5

Socio-cognitive biases in folk AI ethics and risk discourse

Abstract: The ongoing conversation on AI ethics and politics is in full swing and has spread to the general public. Rather than contributing by engaging with the issues and views discussed, we want to step back and comment on the widening conversation itself. We consider evolved human cognitive tendencies and biases, and how they frame and hinder the conversation on AI ethics. Primarily, we describe our innate human capacities known as folk theories and how we apply them to phenomena of different implicit categories. Th…

Cited by 5 publications (2 citation statements). References 146 publications (173 reference statements).
“…In such situations, there may be no “good” option, but rather an actor must select between multiple options none of which are “morally-flawless” (Misselhorn, 2022 ). It is important to use the term “select” over terms such as “think” to avoid anthropomorphizing of AI—because regardless of one's views regarding the feasibility of creating a “conscious” machine in the long-term (see Dignum, 2017 ; Laakasuo et al, 2021 ), it is hard to challenge the fact that machines will be placed in situations where they must ingest data and act. Human understanding and acceptance of ethical behavior will shape how well AI is adopted in society, “…for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicles” (Awad et al, 2018 , p. 64).…”
Section: Introduction (mentioning)
confidence: 99%
“…People are also reluctant to recommend computers as medical practitioners for others when those individuals are described as having unique medical conditions. Similarly, people may view machines as comparatively inflexible - they might believe that computers are capable of doing only what they have been programmed to do (Laakasuo et al, 2021; also see Kim & Duhachek, 2020).…”
Section: Introduction (mentioning)
confidence: 99%