2018 IEEE International Symposium on Information Theory (ISIT)
DOI: 10.1109/isit.2018.8437807

Fairness in Supervised Learning: An Information Theoretic Approach

Abstract: Automated decision-making systems are increasingly being used in real-world applications. In these systems, the decision rules are for the most part derived by minimizing the training error on the available historical data. Therefore, if the data contain a bias related to a sensitive attribute such as gender, race, or religion, say due to cultural or historical discriminatory practices against a certain demographic, the system could continue discriminating in its decisions by including the said bias in its de…
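The abstract's framing suggests a natural information-theoretic diagnostic: decisions are unbiased in the demographic-parity sense only if they carry no information about the sensitive attribute. The sketch below is a minimal illustration of that idea, not the paper's actual measure; the function name and toy data are hypothetical. It estimates the mutual information I(Z; Ŷ) between a binary sensitive attribute Z and a model's decisions Ŷ from paired samples.

```python
# Illustrative sketch (not the paper's exact measure): plug-in estimate of the
# mutual information I(Z; Yhat) between a sensitive attribute Z and a model's
# decisions Yhat. I(Z; Yhat) = 0 exactly when the decisions are statistically
# independent of the sensitive attribute.
import numpy as np

def mutual_information(z, y_hat):
    """Empirical estimate of I(Z; Yhat) in nats from paired samples."""
    z = np.asarray(z)
    y_hat = np.asarray(y_hat)
    mi = 0.0
    for zv in np.unique(z):
        for yv in np.unique(y_hat):
            p_joint = np.mean((z == zv) & (y_hat == yv))
            p_z = np.mean(z == zv)
            p_y = np.mean(y_hat == yv)
            if p_joint > 0:
                mi += p_joint * np.log(p_joint / (p_z * p_y))
    return mi

# Toy example: decisions whose acceptance rate depends on the sensitive attribute.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=10_000)
y_hat = (rng.random(10_000) < 0.3 + 0.4 * z).astype(int)  # biased decisions
print(f"I(Z; Yhat) ≈ {mutual_information(z, y_hat):.3f} nats")
```

An unbiased decision rule would drive this estimate toward zero (up to sampling noise), so it can serve as a rough scalar proxy for how much the decisions reveal about Z.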

Cited by 43 publications (41 citation statements). References 9 publications.
“…For ANNs specifically, these can also take the form of visualizations to describe what features or what abstract concepts an ANN has learned [23, 24]. There have also been a number of information-theoretic approaches for measuring bias and promoting fairness in AI systems [25-29].…”
Section: Introduction (mentioning, confidence: 99%)
“…It is also straightforward to extend our work to alternative definitions of bias such as Equalized Odds [28, 50] by suitably modifying Definition 2. However, it is not clear if such a measure can still be meaningfully interpreted as an "information flow".…”
(mentioning, confidence: 99%)
“…In a sense, this work treads a middle ground between two schools of thought, namely, demographic parity (Agarwal et al. 2018; Ghassami, Khodadadian, and Kiyavash 2018), which enforces the criterion Z ⊥⊥ Ŷ, and equalized odds (Hardt et al. 2016; Ghassami, Khodadadian, and Kiyavash 2018), which enforces Z ⊥⊥ Ŷ | Y (directly or through practical relaxations), where Y denotes the true labels of the historic dataset. Our selective quantification of non-exempt discrimination helps address one of the major criticisms against demographic parity, namely, that it can deliberately choose unqualified members from the protected group (Zemel et al. 2013), e.g., by disregarding the critical features if they are correlated with Z.…”
(mentioning, confidence: 99%)
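The two criteria contrasted in this excerpt can be checked empirically as simple gaps. The sketch below is illustrative only; the toy data generator and function names are not from the cited works. It computes a demographic-parity gap (the spread of P(Ŷ=1 | Z=z) across groups) and a worst-case equalized-odds gap (the same spread taken within each slice of the true label Y).

```python
# Hedged sketch of the two fairness criteria as empirical gaps on toy data.
# Demographic parity compares P(Yhat=1 | Z=z) across groups; equalized odds
# compares P(Yhat=1 | Z=z, Y=y) within each true-label slice y.
import numpy as np

def demographic_parity_gap(y_hat, z):
    rates = [np.mean(y_hat[z == g]) for g in np.unique(z)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_hat, y, z):
    gaps = []
    for label in np.unique(y):
        mask = (y == label)
        rates = [np.mean(y_hat[mask & (z == g)]) for g in np.unique(z)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)  # worst-case gap over true-label slices

# Toy data: predictions depend on Z only through Y, so the equalized-odds gap
# is near zero, yet the demographic-parity gap is not, because Y correlates with Z.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, size=20_000)                          # sensitive attribute
y = (rng.random(20_000) < 0.4 + 0.2 * z).astype(int)         # labels correlated with z
y_hat = (rng.random(20_000) < 0.3 + 0.5 * y).astype(int)     # predictions driven by y

print("demographic parity gap:", round(demographic_parity_gap(y_hat, z), 3))
print("equalized odds gap:    ", round(equalized_odds_gap(y_hat, y, z), 3))
```

This toy construction mirrors the tension the excerpt describes: a predictor can satisfy equalized odds while still violating demographic parity whenever the true labels themselves are correlated with the sensitive attribute.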
“…Alternatively, we can think of the language model with the lowest divergence across groups as the most fair language model; such a definition differs from standard Rawlsian fairness, but has been advocated in Kamishima et al. (2012); Ghassami et al. (2018).…”
(mentioning, confidence: 99%)
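One way to read the "lowest divergence across groups" criterion is to score each candidate model by its worst per-group divergence from a pooled reference distribution and prefer the model with the smallest score. The sketch below is a loose, hypothetical illustration of that reading: the candidate distributions and names are invented, and KL divergence stands in for whichever divergence the cited works intend.

```python
# Hedged sketch: rank candidate models by their worst-case per-group KL
# divergence from the pooled (group-averaged) output distribution, and
# call the model with the smallest worst-case divergence the "fairest".
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# candidate model -> {group: predictive distribution over the same outcomes}
# (toy numbers, purely illustrative)
candidates = {
    "model_a": {"group_0": [0.70, 0.30], "group_1": [0.40, 0.60]},
    "model_b": {"group_0": [0.55, 0.45], "group_1": [0.50, 0.50]},
}

scores = {}
for name, groups in candidates.items():
    pooled = np.mean(list(groups.values()), axis=0)              # reference distribution
    scores[name] = max(kl(p, pooled) for p in groups.values())   # worst group

fairest = min(scores, key=scores.get)
print(scores, "->", fairest)  # model_b has the smaller cross-group divergence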