2019
DOI: 10.1038/s42256-019-0114-4
Principles alone cannot guarantee ethical AI


Cited by 659 publications (517 citation statements)
References 59 publications
“…While robo-advisors can often be accessed more easily now via smartphones, similarly to any other mobile financial service (e.g., Luarn & Lin, 2005; Malaquias & Hwang, 2016; Yen & Wu, 2016), our findings suggest that many people might still be reluctant to use such services, due to machines' lack of legitimacy to make decisions with a moral component (Bigman et al., 2019; Bigman & Gray, 2018). An interesting issue is what degree of responsibility "financial machines" will actually bear in future financial crises (e.g., J.-W. Hong & Williams, 2019), which touches on the issues of ethical AI (Mittelstadt, 2019).…”
Section: Discussion (mentioning)
confidence: 80%
“…Such principles, for example, autonomy, justice or non-maleficence, can provide orientation. But they need to be embedded into a context-sensitive framework [31]. On the other hand, patients’ openness to AI-driven health tools varies considerably across applications and countries [32].…”
Section: The Normative Challenges (mentioning)
confidence: 99%
“…Guidelines and frameworks are currently being developed to help ensure the accountable design, development, and use of digital health tools (Henson et al. 2019; Torous et al. 2019; Morley and Floridi 2019a). However, these developments will not easily transpose into non-clinical settings where comparable mechanisms of accountability and behavioural norms are lacking (Mittelstadt 2019). While there is, in principle, no a priori reason to doubt that such mechanisms could be established in non-clinical settings (e.g.…
Section: Automating Interventions (mentioning)
confidence: 99%