2021
DOI: 10.3389/frobt.2021.744426
Challenging the Neo-Anthropocentric Relational Approach to Robot Rights

Abstract: When will it make sense to consider robots candidates for moral standing? Major disagreements exist between those who find that question important and those who do not, and also between those united in their willingness to pursue the question. I narrow in on the approach to robot rights called relationalism, and ask: if we provide robots moral standing based on how humans relate to them, are we moving past human chauvinism, or are we merely putting a new dress on it? The background for the article is the clash…

Cited by 16 publications (7 citation statements)
References 43 publications
“…While I will not discuss machine agency and anthropomorphism here, it is interesting to note how modern AI inspires such perceptions and descriptions, as it relates to its motivational scaffolding functions. This is also important for understanding how AI can be construed as other even by those, like me, who believe that AI systems are severely limited when it comes to abilities of being meaningful social partners capable of reciprocal relationships (Saetra, 2020, 2021). For the sake of the current discussion, it might be sufficient that learners experience these systems as meaningful others, and this finds support in the relational turn in the field of robot ethics (Coeckelbergh, 2010; Gerdes, 2016; Gunkel, 2018).…”
Section: AlphaGo and Lee Sedol
confidence: 89%
“…Although errors may arise from the robot itself, they may also stem from the user or others involved in its use. 5 Although robots' rights are often considered, at first sight, as non-anthropocentric, 28 it is understood that human beings are assigned stewardship roles over the rest of the environment and all other creations, thus highlighting the need to be watchful of non-humans such as intelligent robots and artificial intelligence when applied in the healthcare system by nurses in clinical practice. On this argument, robotics should be applied with the expectation that the human actors will bear all liabilities and accountabilities related to robotic or technological failures in general.…”
Section: Anthropocentrism
confidence: 99%
“…Ethical biocentrism is an approach in which entities are treated as origins of value in themselves, entirely detached from their instrumental value, including how humans imagine their value. 28 Humans are sidelined by way of endowing technology with autonomous ability. Individuals are thus relieved of responsibility, providing an excuse for oneself and others to reject it.…”
Section: Biocentrism
confidence: 99%
“…On the other hand, the consequences of anthropomorphism in human-technology engagements have prompted researchers to evaluate the ethical risks of endowing technology with humanlike design components (Złotowski et al., 2015; de Graaf, 2016; Saetra, 2020, 2021a, 2021b). At their core, such ethical criticisms, sometimes referred to as the 'forensic problem of anthropomorphism' (Złotowski et al., 2015), consider the creation of humanlike technology as a form of deception, because they 'trick' humans, especially children, into overestimating the technology's 'true' capacities (Sharkey & Sharkey, 2010), or into engaging with the technology in ways which are supposed to be reserved for genuine human-human engagements (Saetra, 2020).…”
Section: What Are Its Consequences? Anthropomorphism From a Normative Research Perspective
confidence: 99%