Human-Machine Shared Contexts 2020
DOI: 10.1016/b978-0-12-820543-3.00006-7
Deciding Machines: Moral-Scene Assessment for Intelligent Systems

Cited by 3 publications (4 citation statements)
References 15 publications
“…7). This moral perception reifies potential insults and injuries to persons for the AI to reason over and provides a basis for ethical behavioral choices in interacting with humans (Greenberg 2018).…”
Section: Ethical AI
confidence: 88%
“…As AI systems evolve to operate autonomously, self-assess and self-regulate, and work effectively alongside humans to accomplish tasks, what will the roles be for the humans and the robots? [Figure caption fragment: …(Asimov 1950), this decomposition (above, left) extracts the perceptual and reasoning capabilities needed for an AI-robot to engage in moral consideration. Ethical decision-making to prevent an AI from harming a human starts with perception (above, right), providing a basis for ethical behavior choices (Greenberg 2018).] The following sections review two projections of the future relationship between humans and robots and then present an alternative possibility.…”
Section: Competing Visions of the Future of AI-Robots
confidence: 99%
“…While it is common for researchers to use the term "human-machine teaming" and to speak of "the human" doing this or that with "the machine," a core tenet of our analysis is that the species of intelligent animal is not the primary feature that distinguishes people from machines. Instead, a more salient difference between the two classes of teammates, in the spirit of Locke, Singer, and Strawson, is that humans are persons, who are able to reciprocally recognize the personhood of other humans, whereas machines are not (yet, if they ever could be) persons, and are not (yet) able to recognize personhood, even if they can discriminate humans from other species [1]. In making this distinction, we seek to highlight the fact that there is something special about those with personhood status (and people, in particular) that makes their dyadic behavior fundamentally distinct from that of their machine counterparts.…”
Section: Personhood and Relationships Over Speciesism
confidence: 99%
“…On the other hand, because of the importance of allowing humans to make (and to take responsibility for) their own decisions, a robot may not be responsible for harms that could result from the risky behavior of a competent adult human. Some of the authors of this article are developing a "moral-scene assessment" technology for robots, or "moral vision" for short, that will be able to identify potential harms within a scene and correctly specify which harms ought to be avoided [30]. Completing this project, however, will require marshalling the ethical and practical knowledge of a range of academic fields such as biomedical ethics, neuroscience, law, economics, and philosophy, as well as the expertise and experience of professionals who have experience performing in the role intended to be occupied by the robot.…”
Section: The Challenges of Nonmaleficence
confidence: 99%