2021
DOI: 10.48550/arxiv.2106.00772
Preprint
Information Theoretic Measures for Fairness-aware Feature Selection

Sajad Khodadadian, Mohamed Nafea, AmirEmad Ghassami, et al.

Abstract: Machine learning algorithms are increasingly used for consequential decision making regarding individuals based on their relevant features. Features that are relevant for accurate decisions may however lead to either explicit or implicit forms of discrimination against unprivileged groups, such as those of certain race or gender. This happens due to existing biases in the training data, which are often replicated or even exacerbated by the learning algorithm. Identifying and measuring these biases at the data …

Cited by 2 publications (4 citation statements)
References 21 publications
“…In Section 4, we review how PID can help in assessing the contributions of either features or data points with applications in feature selection (as discussed in [31]). Related works include [35-38]. In Section 5, we discuss another avenue where PID plays an important role: quantifying tradeoffs between different measures, as we illustrate through the example of local and global fairness in federated learning (as discussed in [34]).…”
Section: Scenario 3: Formalizing Tradeoffs in Distributed Environments (mentioning confidence: 99%)
“…A closely related direction of research that bridges fairness and explainability is the problem of feature selection for algorithmic fairness [35-37, 53]. In [35, 37], the authors propose novel information-theoretic techniques that leverage conditional mutual information with the goal of selecting a subset of features that would achieve fairness, in particular, justifiable fairness [47].…”
Section: Notable Related Work Bridging Fairness, Explainability and In... (mentioning confidence: 99%)
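The conditional-mutual-information criterion mentioned in the statement above can be illustrated with a small sketch. This is not the authors' method from [35, 37], only a plug-in estimate of I(X; Y | S) for discrete variables, the kind of quantity such fairness-aware feature-selection schemes score features with; the function name and the choice of a simple empirical (plug-in) estimator are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(x, y, s):
    """Plug-in estimate of I(X; Y | S) in bits for discrete 1-D sequences.

    Uses the identity
        I(X; Y | S) = sum_{x,y,s} p(x,y,s) * log2( p(x,y,s) p(s) / (p(x,s) p(y,s)) )
    with all probabilities replaced by empirical frequencies.
    """
    n = len(x)
    count_xys = Counter(zip(x, y, s))  # joint counts of (x, y, s)
    count_xs = Counter(zip(x, s))      # marginal counts of (x, s)
    count_ys = Counter(zip(y, s))      # marginal counts of (y, s)
    count_s = Counter(s)               # marginal counts of s
    cmi = 0.0
    for (xi, yi, si), c in count_xys.items():
        # (c/n * count_s/n) / ((count_xs/n) * (count_ys/n)) simplifies to:
        cmi += (c / n) * np.log2(c * count_s[si] / (count_xs[(xi, si)] * count_ys[(yi, si)]))
    return cmi

# Feature ranking sketch: score each candidate feature X_i by I(X_i; Y | S),
# i.e. information about the label Y not already carried by the sensitive attribute S.
def rank_features(features, y, s):
    scores = {name: conditional_mutual_information(col, y, s) for name, col in features.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

When X determines Y within each group S, the estimate recovers the conditional entropy H(X | S); when X and Y are independent given S, it is zero. Real selection methods in this literature add estimation corrections and fairness-specific terms beyond this bare quantity.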
“…However, [2, 12-19] focus on quantifying exempt and non-exempt discrimination (either using observational measures or causal modelling) given a choice of critical features, rather than quantifying the contribution of each individual feature. Alternatively, [33, 34] propose information-theoretic techniques to carefully select features for fair decision making. In this work, we focus on explaining the contributions of all the individual features to the overall disparity, even when we may not have access to the exact decision-making mechanism (including scenarios with human-in-the-loop).…”
Section: Case Study on an Artificial Admissions Dataset (mentioning confidence: 99%)