2020
DOI: 10.1016/j.artint.2019.103179
Artificial systems with moral capacities? A research design and its implementation in a geriatric care system

Cited by 17 publications (13 citation statements)
References 3 publications
“…The included studies indicated practical approaches to responsible AI innovation in LTC (see Table 2 ). Most papers report on responsible AI principles such as privacy, security, transparency, autonomy, trust, justice, and fairness ( n = 22), while three papers discuss measures to address responsible AI innovation that are independent of principles ( Misselhorn, 2020 ; Poulsen & Burmeister, 2019 ; Yew, 2020 ).…”
Section: Results
Confidence: 99%
“…However, as I have argued, the behavior an AI would evoke in interaction with a human would depend on which kind of character the interacting human already has. Recently, Misselhorn, for example, has suggested methodological guidelines for developing AI-systems in geriatric care, taking into consideration the specific needs of older people (Misselhorn, 2020). Secondly, Fröding and Peterson do not consider whether and in which way the interacting humans are in a psychological or emotional process of developing their character.…”
Section: Final Discussion and Summary
Confidence: 99%
“…Current and foreseeable technology lacks free will, which would, therefore, preclude machines from having moral agency. However, it is debated whether human-like prerequisites for moral agency should be imposed on machines or if a hard line should be drawn between human moral agency and that of machines [30,33,34,49].…”
Section: About Moral Agency of AMAs
Confidence: 99%