2019
DOI: 10.48550/arxiv.1902.04783
Preprint

Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning

Abstract: Fairness for Machine Learning has received considerable attention recently. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which …

Cited by 10 publications (12 citation statements)
References 11 publications
“…For instance, Lee et al studied people's perception of trust, fairness, and justice in the context of algorithmic decision-making [56,57] and proposed how to embed these views into a policymaking framework [58]. Other scholars explored people's perceptions of procedural [41] and distributive [84,90] aspects of algorithmic fairness and studied how they relate to individual differences [42,76,108]. Nonetheless, little attention is paid to the public attribution of (moral) responsibility to stakeholders (e.g., [43,56,81]), particularly the prospect of responsibility ascription to the AI system per se.…”
Section: Responsibility, Fairness, Trust in HCI Literature
confidence: 99%
“…There are several conflicting definitions of fairness, many of which are not simultaneously achievable [23]. The appropriate choice of a disparity metric is generally task dependent, but balancing error rates between different subgroups is a common consideration [8,17], with equal accuracy across subgroups being a popular choice in medical settings [42]. In this work we will consider the equality of opportunity notion of fairness and evaluate the rate of correct diagnosis among sick members of different subgroups, as well as the misdiagnosis rate among those with no disease.…”
Section: Background and Related Work
confidence: 99%
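The equality-of-opportunity notion mentioned in the statement above — balancing the rate of correct diagnosis (true-positive rate) across subgroups — can be sketched in a few lines. This is an illustrative implementation, not code from the cited papers; the function names and toy data are assumptions.

```python
# Hedged sketch: the "equality of opportunity" gap, i.e. the difference
# in true-positive (correct-diagnosis) rates between two subgroups.
# All names and data below are illustrative, not from the cited works.

def true_positive_rate(y_true, y_pred):
    """Fraction of actually-sick cases (label 1) that are correctly diagnosed."""
    preds_on_sick = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_on_sick) / len(preds_on_sick) if preds_on_sick else 0.0

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in TPR between the two subgroups listed in `group`."""
    rates = []
    for g in sorted(set(group)):
        yt = [t for t, a in zip(y_true, group) if a == g]
        yp = [p for p, a in zip(y_pred, group) if a == g]
        rates.append(true_positive_rate(yt, yp))
    return abs(rates[0] - rates[1])

# Toy example: binary labels, model predictions, and a group attribute.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = equal_opportunity_gap(y_true, y_pred, group)
```

The same pattern, with labels flipped so that the "positive" class is absence of disease, yields the misdiagnosis-rate comparison the statement also mentions.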
“…An important axis of accountability of Artificial Intelligence systems is fairness. There are multiple notions of what constitutes fairness in machine learning [15]; to illustrate the effect of imputation on fairness and explainability, we focus on group-level notions of fairness. A system that is fair need not be explainable if the underlying algorithm can ensure that the various groups of interest are being scored fairly, even when the model is a black box.…”
Section: Operationalizing Explanations with Imputation
confidence: 99%