Many hands make many fingers to point: challenges in creating accountable AI (2021)
DOI: 10.1007/s00146-021-01302-0

Cited by 13 publications
(6 citation statements)
References 45 publications
“…Systemic and implicit biases such as racism and other forms of discrimination can inadvertently manifest in AI through the data used in training, as well as through the institutional policies and practices underlying how AI is commissioned, developed, deployed, and used. Statistical/algorithmic and human cognitive and perceptual biases enter the engineering and modeling processes themselves, and an inability to properly validate model performance leaves these biases exposed during deployment [62,103,112,113]. These biases collide with the cognitive biases of the individuals interacting with the AI systems as users, experts in the loop, or other decision makers.…”
Section: Values (mentioning)
confidence: 99%
“…However, such models can exacerbate statistical biases because restrictive assumptions on the training data often do not hold with nuanced demographics. Furthermore, designers who must make decisions on what variables to include or exclude can impart their own cognitive biases into the model [112,184]. Complex models are often used on nonlinear, multimodal data such as text and images.…”
Section: Algorithmic Effects (mentioning)
confidence: 99%
“…Examples of values that can be designed include "data privacy, accessibility, responsibility, accountability, transparency, explainability, efficiency, consent, inclusivity, diversity, security, and control" (Hasselbalch, 2021). (Also see Slota et al., 2021; Umbrello & Van de Poel, 2021). Different stakeholders, ranging from developers to users, institutions or commercial entities, have different (often conflicting) data interests (Delgado et al., 2021).…”
Section: Data Interests (mentioning)
confidence: 99%
“…Unsurprisingly, ethical principles of AI are a major theme for many of the authors whose work appears here. For example, Slota et al. (2021) conducted interviews with 26 stakeholders to explore the challenges of AI, including the distribution of agent empowerment and the difficulty of creating accountable systems. They propose the creation of accountable sociotechnical systems (cf.
Section: The Ethical Principles of AI (mentioning)
confidence: 99%