Ethics of Artificial Intelligence 2020
DOI: 10.1093/oso/9780190905033.003.0017

Designing AI with Rights, Consciousness, Self-Respect, and Freedom

Abstract: This chapter proposes four policies of ethical design of human-grade AI. Two of the policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, the chapter argues that we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. The other two policies concern respect and freedom. The chapter argues that if…

Cited by 10 publications (8 citation statements). References 29 publications (4 reference statements).

Citation statements (ordered by relevance):
“…our other just-mentioned example of maximally humanlike robots with high degrees of artificial intelligence (e.g. Danaher, 2020; Schwitzgebel & Garza, 2020). We are not yet living in a world featuring such technologies.…”
Section: Examples Of Ethical Debates That Can Be Viewed As Part Of Th...
Citation type: mentioning (confidence: 99%)
“…These authors typically focus on AI systems in the form of (humanoid) robots, rather than software agents or computer programs. Philosophers like Mark Coeckelbergh [43], David Gunkel [44], John Danaher [45], Janina Loh [46], Eric Schwitzgebel and Mara Garza [47,48], and Chris Wareham [49] argue that some robots are or might become moral persons, to whom we owe some degree of moral consideration.…”
Section: The Value Of Control And Types Of AI Agency Part II: Humanoi...
Citation type: mentioning (confidence: 99%)
“…Personally, I am skeptical about the idea that any robots might have, or come to have, properties or abilities that would genuinely make them into full moral persons. In particular, I am skeptical about the idea, which Schwitzgebel and Garza [47,48] take very seriously, that robots might come to have humanlike minds, with humanlike consciousness and emotions. But I grant them that if robots could come to have such minds, then they would potentially, for this reason, become full moral persons to whom we owe the same form of moral consideration that we owe to our fellow human beings.…”
Section: The Value Of Control And Types Of AI Agency Part II: Humanoi...
Citation type: mentioning (confidence: 99%)
“…Though they may seem far-fetched at present, we should consider the possibility of one day encountering beings like this – or intentionally or unintentionally creating something like them with future advances in AI or bioengineering (cf. Liao, 2020; Schwitzgebel and Garza, 2015, 2020). For understanding our ethical obligations to a strange or alien mind, should our guiding question be about the presence or absence of consciousness per se, or specifically the presence or absence of conscious pleasure and suffering?…”
Section: What Is Sentientism?
Citation type: mentioning (confidence: 99%)