2022
DOI: 10.1038/s42256-022-00533-0
Distinguishing two features of accountability for AI technologies

Abstract: Policymakers and researchers consistently call for greater human accountability for AI technologies. We should be clear about two distinct features of accountability. Across the AI ethics and global policy landscape, there is consensus that there should be human accountability for AI technologies [1]. These machines are used for high-stakes decision-making in complex domains (for example, in healthcare, criminal justice and transport) where they can cause or occasion serious harm. Some use deep machine learning …

Cited by 8 publications (4 citation statements)
References 61 publications

“…We are also interested in understanding how human-centred explanation could improve overall transparency (of the design and deployment process as well as the machine output) that is needed for assuring the ethical acceptability of AI in safety-critical applications [38], particularly in healthcare. Additionally, we are exploring the concept of accountability for AI in terms of two core features: giving an explanation and facing the consequences [39]. In particular, we are mapping how these two features relate to each other and the conditions under which the former may or may not be necessary for establishing the latter from a moral and legal perspective.…”
Section: Discussion (mentioning)
confidence: 99%
“…As discussed in Section 3, AI-based systems are not included amongst the subcategories of actor that can be morally responsible for an O, and they are therefore excluded from Figure 5. […] is to be open to giving a particular kind of explanation, specifically a justification for their role in an AI-based system's outputs or impacts [85]. Not only does this not entail blame, it also helps us understand how to avoid future incidents and accidents; acknowledging this and learning from their reasons for action can help to improve processes and procedures.…”
Section: 2.4 (mentioning)
confidence: 99%

Unravelling Responsibility for AI
Porter, Ryan, Morgan et al. 2024 · Preprint · Self-citation
“…First, it entails demanding from the responsible actors an exhaustive and detailed explanation (accountability-explanation) regarding: (i) the system's decision (action or outcome); (ii) the causal relationship between an event and a given effect; and (iii) the normative justification for its deployment. Second, it proposes holding morally and legally accountable (accountability-held responsible) those actors who seek to avoid the consequences of the adverse outcomes of the use of algorithmic systems (Porter et al., 2022).…”
Section: Transparencia Algorítmica: Conceptos y Tipologías [Algorithmic Transparency: Concepts and Typologies] (unclassified)