2020
DOI: 10.1016/j.procs.2020.02.219

AGI Protocol for the Ethical Treatment of Artificial General Intelligence Systems

Cited by 4 publications (3 citation statements)
References 2 publications
“…Though the paper does not present a moral argument, the author encourages that they be granted rights once their capabilities advance sufficiently.

Kaufman (1994): Criticizing a common view among environmental philosophers, Kaufman argues that “either machines have interests (and hence moral standing) too or mentality is a necessary condition” for moral consideration. Additionally, Kaufman argues that “the aspect of mentality necessary for having interests is more complicated than mere sentience.”

Kelley and Atreides (2020): Kelley and Atreides describe “a laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could be conscious and have subjective emotional experiences ‘theoretically.’” They claim that “there are now systems—including ones in our lab—that… are potentially conscious entities” and note that “[t]he fundamental assumption of this Protocol is that the treatment of sapient and sentient entities matters ethically.”

Khoury (2016): Different approaches to the rights of and liabilities for “human-like robots” are evaluated. Khoury believes that they are “not alive” and, if they go “rogue,” would need to be either “fixed” or “terminated.”

Kim and Petrina (2006): Kim and Petrina discuss the computer game The Sims and place it in the context of previous discussions of robot rights.

Kiršienė and Amilevičius (2020): The authors examine AI legal issues in the context of the European Parliament’s proposals, from a legal and technological perspective.…”
Section: Appendix
“…Tomasik ( 2011 ), Bostrom ( 2014 ), Gloor ( 2016a ), and Sotala and Gloor ( 2017 ) argue that the insufficient moral consideration of sentient artificial entities, such as the subroutines or simulations run by a future superintelligent AI, could lead to astronomical amounts of suffering. Kelley and Atreides ( 2020 ) have already proposed a “laboratory process for the assessment and ethical treatment of Artificial General Intelligence systems that could be conscious and have subjective emotional experiences.”…”
Section: Introduction
“…The possibility of AGI disaster is based on the idea that AGI will one day outsmart humanity, seize control of the world, and achieve whatever objectives it is programmed to pursue. Catastrophe will ensue unless it is configured with aims that are secure for mankind, or something else one cares about [226].…”
Section: And Explainability: a Machine Learning Zoo Mini-tour And Explainable AI: A Review Of Machine Learning Interpretability Methods Pa