2019
DOI: 10.48550/arxiv.1902.07823
Preprint

Stable and Fair Classification

Lingxiao Huang, Nisheeth K. Vishnoi

Abstract: Fair classification has been a topic of intense study in machine learning, and several algorithms have been proposed towards this important task. However, in a recent study, Friedler et al. observed that fair classification algorithms may not be stable with respect to variations in the training dataset, a crucial consideration in several real-world applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework b…
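The abstract is cut off before the framework itself is described, so the following is only a generic illustration of the two properties it names, not the authors' method: a logistic-regression classifier trained with a demographic-parity penalty (one common notion of group fairness), followed by a crude stability probe in the spirit of Friedler et al.'s observation, retraining on a bootstrap resample of the training data and counting how many predictions flip. All names (`train`, `lam`) and the choice of penalty are illustrative assumptions.

```python
# Illustrative sketch only, NOT the paper's framework: logistic regression
# with a smooth demographic-parity penalty, plus a bootstrap stability probe.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, s, lam=1.0, lr=0.1, steps=500):
    """Gradient descent on logistic loss + lam * (parity gap)^2, where the
    parity gap is the mean predicted score on group s=1 minus group s=0."""
    w = np.zeros(X.shape[1])
    g1, g0 = s == 1, s == 0
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)          # standard logistic gradient
        gap = p[g1].mean() - p[g0].mean()           # demographic-parity gap
        dp = p * (1.0 - p)                          # sigmoid derivative
        grad_gap = (X[g1] * dp[g1][:, None]).mean(axis=0) \
                 - (X[g0] * dp[g0][:, None]).mean(axis=0)
        w -= lr * (grad_loss + lam * 2.0 * gap * grad_gap)
    return w

# Toy data: feature 0 leaks the sensitive attribute s.
n = 400
s = rng.integers(0, 2, n)
X = np.column_stack([s + rng.normal(0, 1, n), rng.normal(0, 1, n)])
y = (X[:, 1] + 0.5 * s + rng.normal(0, 0.5, n) > 0).astype(float)

w = train(X, y, s)

# Stability probe: retrain on a bootstrap resample and measure how often
# predictions flip, echoing Friedler et al.'s instability observation.
idx = rng.integers(0, n, n)
w2 = train(X[idx], y[idx], s[idx])
flips = np.mean((X @ w > 0) != (X @ w2 > 0))
scores = sigmoid(X @ w)
print(f"parity gap: {abs(scores[s == 1].mean() - scores[s == 0].mean()):.3f}")
print(f"prediction disagreement after resampling: {flips:.3f}")
```

Raising `lam` trades accuracy for a smaller parity gap, while the disagreement figure measures the instability under dataset variation that the abstract refers to; per the abstract, the paper's contribution is a framework that addresses both properties jointly.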

Citation Types: 0 supporting, 12 mentioning, 0 contrasting

Cited by 10 publications (14 citation statements)
References 28 publications (54 reference statements)
“…Researchers in academia and industry are building tools to help detect biases in algorithms [6,52,65,77]. Additionally, there is a growing library of fairness-aware machine learning algorithms for classification [31,38,43,57], regression [2,7], causal inference [51,60], word embeddings [11,12], machine translation [25] and finally, ranking [18,73,81].…”
Section: Algorithmic Fairness (mentioning)
confidence: 99%
“…Most proposed methods treat solving for fairness based on the definition of fairness tailored to their specific objective. Of considerable importance are techniques such as those proposed by [220] which not only satisfy fairness constraints, but also tend to be stable towards adversarial attacks and variations in datasets during testing. Regression-based fairness techniques eliminate bias at training time by hand-crafting loss functions that conform to group fairness, individual fairness or hybrid fairness, although they have not received a lot of attention in research [221].…”
Section: Assessing Data Leakage For Defence Purposes (mentioning)
confidence: 99%
“…The use of algorithms to aid critical decision making processes in the government and the industry has attracted commensurate scrutiny from academia, lawmakers and social justice workers in recent times [4,7,71], because ML systems trained on a snapshot of the society has the unintended consequences of learning, propagating and amplifying historical social biases and power dynamics [5,56]. The current research landscape consists of both ML explanation methods and fairness metrics to try and uncover the problems of trained models [8,30,45,59,68], and fairness aware ML algorithms, for instance classification [31,34,37,47], regression [2,9], causal inference [43,49], word embeddings [13,14] and ranking [16,64,72].…”
Section: Algorithmic Fairness (mentioning)
confidence: 99%