2017
DOI: 10.1126/scirobotics.aan6080

Transparent, explainable, and accountable AI for robotics

Abstract: Fair, accountable AI and robotics need precise regulation and better methods to certify, explain, and audit inscrutable systems.

Cited by 223 publications (118 citation statements). References 3 publications (1 reference statement).
“…First, our research contributes to research on moral outrage. By demonstrating the role of attribution of prejudiced motivation, we contribute to the literature suggesting that moral outrage is not only a response to harm (Hechler & Kessler, 2018). … Third, our research complements current work in computer science, legal studies, and other disciplines on how to create fair algorithms (Abdul, Vermeulen, Wang, Lim, & Kankanhalli, 2018; Ananny, 2016; Kusner & Loftus, 2020; Sandvig, Hamilton, Karahalios, & Langbort, 2016; Selbst & Barocas, 2018; Wachter, Mittelstadt, & Floridi, 2017; Zou & Schiebinger, 2018). While this work on "algorithm ethics" discusses how to create fair algorithms, our work begins to explore the psychological response to biased algorithms and how it differs from the psychological response to biased humans.…”
Section: Discussion (mentioning)
confidence: 57%
“…Complexity, variability, subjectivity, and lack of standardisation, including variable interpretation of the 'components' of each of the ethical principles, make this challenging (Alshammari & Simpson, 2017). However, it is achievable if the right questions are asked (Green, 2018) (Wachter, Mittelstadt, & Floridi, 2017a) and closer attention is paid to how the design process can influence (Kroll, 2018) whether an algorithm is more or less 'ethically-aligned.' Thus, this is the aim of this research project: to identify the methods and tools available to help developers, engineers, and designers of ML specifically (though we hope the results of this research may be easily applicable to other branches of AI) reflect on and apply 'ethics' (Adamson, Havens, & Chatila, 2019) so that they may know not only what to do or not to do, but also how to do it, or avoid doing it (Alshammari & Simpson, 2017).…”
Section: Moving From Principles To Practice (mentioning)
confidence: 99%
“…Today, American and European policies are divergent regarding accountability of AI. The US is moving towards ethical design, education, and self‐regulation, while the European Union is focusing on individual rights by adopting a regulatory approach in the Regulation of 27 April 2016 on personal data. It is for this reason that Wachter et al. (2017) state that “Regulatory standards need to be developed to set system‐ and context‐dependent accountability requirements based on potential bias and discriminatory decision‐making and risks to safety, fairness, and privacy.”…”
Section: Responsibility (mentioning)
confidence: 99%