2018
DOI: 10.1126/science.aat5991
How AI can be a force for good

Abstract: An ethical framework will help to harness the potential of AI while keeping humans in control

Cited by 337 publications (171 citation statements)
References 12 publications (12 reference statements)
“…Especially economic incentives are easily overriding commitment to ethical principles and values. This implies that the purposes for which AI systems are developed and applied are not in accordance with societal values or fundamental rights such as beneficence, non-maleficence, justice, and explicability (Taddeo and Floridi 2018;Pekka et al 2018).…”
Section: Results
confidence: 99%
“…Moreover, ethically, it is not always clear where the responsibility for the performance and behaviour of such algorithms lie as they are constructed and implemented by numerous actors including designers, end-users and developers of both the hardware and software required. This issue has been termed 'distributed agency' that may need to be addressed by novel moral and legal frameworks (Taddeo and Floridi, 2018).…”
Section: The Potential Strengths and Limitations of Machine Learning
confidence: 99%
“…A growing body of literature covers questions of AI and ethical frameworks [1, 6–10], laws [3, 11–14] to govern the impact of AI and robotics [15], technical approaches like algorithmic impact assessments [16–18], and building trustworthiness through system validation [19]. These three guiding forces in AI governance (law, ethics and technology) can be complementary. However, the debate on when which approach (or combination of approaches) is most relevant remains unresolved, as Nemitz and Pagallo expertly highlight in this issue [13, 17].…”
Section: Introduction
confidence: 99%