2021
DOI: 10.1007/s43681-021-00054-3

A framework to contest and justify algorithmic decisions

Abstract: In this paper, we argue that the possibility of contesting the results of Algorithmic Decision Systems (ADS) is a key requirement for ADS used to make decisions with a high impact on individuals. We discuss the limitations of explanations and motivate the need for better facilities to contest or justify the results of an ADS. While the goal of an explanation is to make it possible for a human being to understand, the goal of a justification is to convince that the decision is good or appropriate. To claim that…

Cited by 4 publications (2 citation statements); citing publications span 2021–2024. References 23 publications.

Citation statements (ordered by relevance):
“…Although not the focus of this work, perception of fairness can be modeled as a probabilistic approach to determine whether a norm (e.g., non-discrimination regulation) is breached [26]. People also perceive fairness through a reflective, justificatory process of moral reasoning [27], which typically happens only after an action is taken.…”
Section: Fairness (mentioning)
Confidence: 99%
“…Normative approaches may also be released by government organizations as strategic plans. For example, the US National Artificial Intelligence Research and Development Strategic Plan [37] states that researchers need to improve fairness, accountability, and safety in AI, including building ethical AI (pp. 26–27). Our goal in this paper is to provide specific methods of failure-assessment-based risk analysis that can help to achieve human-centered AI that is fair.…”
Section: Normative Approaches (mentioning)
Confidence: 99%