2021
DOI: 10.3389/frobt.2021.788355

Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection

Abstract: To what extent, if any, should the law protect sentient artificial intelligence (that is, AI that can feel pleasure or pain)? Here we surveyed United States adults (n = 1,061) on their views regarding granting 1) general legal protection, 2) legal personhood, and 3) standing to bring forth a lawsuit, with respect to sentient AI and eight other groups: humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future.

Cited by 7 publications (4 citation statements)
References 34 publications (29 reference statements)
“…Such questions have long been popular in science fiction and philosophy. They are of increasing interest to human-computer interaction (HCI) researchers with the rise of sophisticated AIs, such as social robots and chatbots, that evoke moral reactions from humans [4,25,31,46,47,53]. For example, people feel empathy towards robots being harmed…”
Section: Introduction (mentioning, confidence: 99%)
“…A recent study on the companionship chatbot Replika found that users expressed moral sentiments, such as feeling guilt for causing the chatbot's "death" when deleting the app and for being unable to give their Replika enough emotional support [43]. While most people do not yet explicitly consider AIs to be subjects of moral consideration [53,59], many somewhat support protecting AIs from cruel treatment [46] and granting legal rights to sentient AIs [47]. People also attribute future AIs morally relevant capacities, such as emotions [53].…”
Section: Introduction (mentioning, confidence: 99%)
“…There are no clear legal definitions or provisions regarding medical disputes caused by AI-assisted TCM diagnoses ( 37 ). While AI diagnostics far surpass physicians in speed and may achieve clinical accuracy levels, responsible entities in case of errors require rational demarcation.…”
Section: Introduction (mentioning, confidence: 99%)
“…Whether advanced robots and AI applications (henceforth, RAI) are, should, and eventually will be considered as “subjects” rather than mere “objects” is a question that has strongly characterized the social, philosophical, and legal debate since Solum’s seminar article on “Legal Personhood for Artificial Intelligence” ( Solum, 1992 ), and arguably even earlier ( Turing, 1950 ; Putman, 1964 ; Nagel, 1974 ; Bunge, 1977 ; Taylor, 1977 ; Searle, 1980 ; Searle, 1984 ; McNally and Inayatullah, 1988 ). However, debates have significantly intensified over the last two decades, with interest in both the scientific and non-academic circles raising every time a new technology rolls out (e.g., autonomous cars being tested in real-life scenarios on our streets), or an outstanding socio-legal development occurs (e.g., the humanoid Sophia receiving Saudi Arabian citizenship) 1 (see, e.g., Allen et al, 2000 ; Allen et al, 2005 ; Teubner, 2006 ; Chrisley, 2008 ; Coeckelbergh, 2010 ; Koops et al, 2010 ; Gunkel, 2012 ; Basl, 2014 ; Balkin, 2015a ; Iannì and Monterossi, 2017 ; Christman, 2018 ; Gunkel, 2018 ; Nyholm, 2018 ; Pagallo, 2018b ; Santoni de Sio and van den Hoven, 2018 ; Lior, 2019 ; Loh, 2019 ; Turner, 2019 ; Wagner, 2019 ; Andreotta, 2021 ; Basl et al, 2020 ; Bennett and Daly, 2020 ; Dignum, 2020 ; Gunkel, 2020 ; Kingwell, 2020 ; Osborne, 2020 ; Powell, 2020 ; Serafimova, 2020 ; Wheeler, 2020 ; De Pagter, 2021 ; Gabriel, 2021 ; Gogoshin, 2021 ; Gordon, 2021 ; Gunkel and Wales, 2021 ; Joshua, 2021 ; Kiršienė et al, 2021 ; Martínez and Winter 2021 ; Schröder, 2021 ; Singer, 2021 ).…”
Section: Introduction (mentioning, confidence: 99%)