Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018
DOI: 10.1145/3219819.3220046

A Unified Approach to Quantifying Algorithmic Unfairness

Abstract: Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups …
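As a concrete illustration of the core idea, here is a minimal Python sketch of the generalized entropy index applied to per-individual benefits. The benefit definition b_i = ŷ_i − y_i + 1 for binary decisions follows Speicher et al.; the function itself is an illustrative sketch, not the authors' reference implementation.

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2):
    """Generalized entropy index GE(alpha) over individual benefits.

    GE(alpha) = (1 / (n * alpha * (alpha - 1))) * sum((b_i / mu)^alpha - 1)
    for alpha not in {0, 1}; it is 0 when every individual receives the
    same benefit and grows as benefits are spread more unequally.
    """
    b = np.asarray(benefits, dtype=float)
    mu = b.mean()
    return float(np.mean((b / mu) ** alpha - 1.0) / (alpha * (alpha - 1.0)))

# Benefit per individual for binary decisions, b_i = yhat_i - y_i + 1:
# 1 for a correct decision, 2 for a beneficial error (false positive),
# 0 for a harmful error (false negative).
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 1, 1])
benefits = y_pred - y_true + 1
print(generalized_entropy_index(benefits, alpha=2))  # 0.2
```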

Cited by 144 publications (40 citation statements)
References 24 publications

“…Because of the ubiquitous use of machine learning and artificial intelligence (AI) for decision making, there is increasing urgency in ensuring that these algorithmic decisions are fair, that is, that they do not discriminate against some groups in favour of others, especially groups defined over protected attributes such as gender, race and nationality 1–7 . Despite all this growing research interest, fairness in decision making did not arise as a consequence of the use of machine learning and other predictive models in data science and AI 8,9 .…”
Section: Introduction (mentioning)
Confidence: 99%
“…[173] propose the concept of counterfactual fairness, which builds on causal fairness models and is related to both individual and group fairness concepts. [252] proposes a generalized entropy index, which can be parameterized for different values of α and measures the individual impact of the classification outcome. This is similar to established distribution indices such as the Gini index in economics.…”
Section: Individual and Counterfactual Fairness Metrics (mentioning)
Confidence: 99%
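Since the excerpt relates the generalized entropy index to the Gini index, a minimal sketch of the Gini coefficient over the same kind of benefit vector may help (illustrative only, not code from either cited paper):

```python
import numpy as np

def gini_index(benefits):
    """Gini coefficient: mean absolute difference over all ordered pairs
    of benefits, normalized by twice the mean benefit (0 = perfect
    equality; values near 1 = extreme inequality)."""
    b = np.asarray(benefits, dtype=float)
    pairwise = np.abs(b[:, None] - b[None, :])
    return float(pairwise.mean() / (2.0 * b.mean()))

print(gini_index([1, 1, 0, 2, 1]))  # 0.32 for the benefit vector above
```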
“…constant multiple K) of the metric distance d_X between those individuals themselves, that is, d_Y(f(x_1), f(x_2)) ≤ K·d_X(x_1, x_2). The condition is typically expressed as a constraint in an optimisation problem; (b) counterfactual fairness, where, given a causal model (U, V, F) (U the latent background variables, V = S ∪ X the observables, S the sensitive variables, and F the structural equation models), fairness requires that individual outcomes would be equivalent if the sensitive variables had varied; (c) generalised entropy, comparing individualised prediction accuracy to average prediction accuracy (see Speicher et al. 2018, where fairness measures are drawn from axiomatic approaches in economics). In the quantum setting, outcomes are also probability distributions, but available metrics are more limited: they must compare the similarity of quantum input states (the quantum analogue of d_X(x_1, x_2)) and final output states.…”
Section: Calibration (mentioning)
Confidence: 99%
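The Lipschitz-style condition in (a) can be checked empirically by enumerating pairs of individuals. In this sketch the distance functions d_X and d_Y, the constant K, and the toy data are all illustrative assumptions:

```python
import numpy as np

def lipschitz_violations(X, preds, d_X, d_Y, K=1.0):
    """Return pairs (i, j) violating d_Y(f(x_i), f(x_j)) <= K * d_X(x_i, x_j),
    i.e. pairs of similar individuals who receive dissimilar outcomes."""
    bad = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if d_Y(preds[i], preds[j]) > K * d_X(X[i], X[j]):
                bad.append((i, j))
    return bad

# Toy data: Euclidean distance over features, absolute difference over
# predicted scores; individuals 0 and 1 are near-identical but scored apart.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])
scores = [0.50, 0.90, 0.55]
d_X = lambda a, b: float(np.linalg.norm(a - b))
d_Y = lambda a, b: abs(a - b)
print(lipschitz_violations(X, scores, d_X, d_Y, K=1.0))  # [(0, 1)]
```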