2022
DOI: 10.48550/arxiv.2206.00667
Preprint

How Biased is Your Feature?: Computing Fairness Influence Functions with Global Sensitivity Analysis

Abstract: Fairness in machine learning has received significant attention due to the widespread use of machine learning in high-stakes decision-making tasks. Unless regulated with a fairness objective, machine learning classifiers may demonstrate unfairness or bias towards certain demographic populations in the data. Thus, quantifying and mitigating the bias induced by classifiers has become a central concern. In this paper, we aim to quantify the influence of different features on the bias of a classifier. …
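Per the title and abstract, the paper's goal is to attribute a classifier's bias to individual features using global sensitivity analysis. The sketch below is not the paper's algorithm; it is a minimal, hypothetical illustration of the general idea, using a simple permutation-based stand-in to attribute a statistical-parity gap to the features of a toy classifier. All data, feature names, and the attribution scheme are assumptions made for this example.

# Minimal sketch (not the paper's method): permutation-based stand-in for a
# per-feature "fairness influence" on a classifier's statistical parity difference.
# All data, feature names, and the attribution scheme here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: "group" is a protected attribute; x1 is correlated with it,
# x2 and x3 are ordinary features.
n = 5000
group = rng.integers(0, 2, size=n)
x1 = rng.normal(size=n) + 0.8 * group      # correlated with the protected group
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.column_stack([group, x1, x2, x3])
y = (x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)

def statistical_parity_difference(model, X, group):
    # |P(Yhat = 1 | group = 0) - P(Yhat = 1 | group = 1)| on the given data.
    yhat = model.predict(X)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

base_spd = statistical_parity_difference(clf, X, group)

# Crude per-feature influence: permute one feature at a time (breaking its
# association with everything else) and record how much the parity gap drops.
influences = {}
for j, name in enumerate(["group", "x1", "x2", "x3"]):
    drops = []
    for _ in range(20):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        drops.append(base_spd - statistical_parity_difference(clf, Xp, group))
    influences[name] = float(np.mean(drops))

print(f"baseline statistical parity difference: {base_spd:.3f}")
for name, infl in influences.items():
    print(f"influence of {name}: {infl:+.3f}")

The permuted-feature drop in the parity gap is only a rough proxy for a feature's contribution to bias; the paper's Fairness Influence Functions are, per its title, computed via global sensitivity analysis rather than permutation.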

Cited by 0 publications. References 29 publications (43 reference statements).