2021
DOI: 10.48550/arxiv.2107.04423
Preprint

Multiaccurate Proxies for Downstream Fairness

Abstract: We study the problem of training a model that must obey demographic fairness conditions when the sensitive features are not available at training time; in other words, how can we train a model to be fair by race when we don't have data about race? We adopt a fairness pipeline perspective, in which an "upstream" learner that does have access to the sensitive features will learn a proxy model for these features from the other attributes. The goal of the proxy is to allow a general "downstream" learner with minima…
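Below is a minimal sketch of the two-stage pipeline the abstract describes, assuming a logistic-regression proxy and a simple proxy-weighted demographic-parity estimate; the function names and the use of scikit-learn are illustrative assumptions, not the paper's API.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_proxy(X_upstream, s_upstream):
        # Upstream learner: has access to the sensitive feature s and fits
        # a probabilistic proxy predicting s from the other attributes X.
        proxy = LogisticRegression(max_iter=1000)
        proxy.fit(X_upstream, s_upstream)
        return proxy

    def proxy_dp_gap(proxy, X, y_pred):
        # Downstream fairness estimate: never touches the true group
        # labels, only the proxy's soft probabilities p(s=1 | x).
        p = proxy.predict_proba(X)[:, 1]
        rate_1 = np.dot(p, y_pred) / p.sum()                 # est. positive rate, s=1
        rate_0 = np.dot(1.0 - p, y_pred) / (1.0 - p).sum()   # est. positive rate, s=0
        return abs(rate_1 - rate_0)

The key design point, echoed in the citation statements below, is that the downstream learner consumes the proxy's soft probabilities rather than hard 0-1 group predictions.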

Cited by 2 publications (2 citation statements) | References 10 publications
“…However, Awasthi et al [2021] show that, due to different underlying base rates across groups, the Bayes optimal predictor of demographic group membership can yield a maximally biased estimate of unfairness. Diana et al [2021] demonstrate that one can instead rely on a multiaccurate regressor, first introduced by Kim et al [2019], rather than a 0-1 classifier, both to estimate the unfairness without bias and to build a fair classifier for downstream tasks. When only some data points are missing demographic information, Jeong et al [2021] show how to bypass explicitly imputing the missing values and instead rely on a decision-tree-based approach to optimize a fairness-regularized objective function.…”
Section: Related Work
confidence: 99%
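As a sketch of the regressor-based estimate this statement alludes to (the notation $\hat{p}$, $f$, and $\widehat{\Delta}$ is ours, not the cited papers'): with a proxy $\hat{p}(x) \approx \Pr[S = 1 \mid X = x]$ and a downstream classifier $f$, the demographic-parity gap can be estimated as

\[
\widehat{\Delta}_{\mathrm{DP}}(f) = \left| \frac{\mathbb{E}\big[\hat{p}(X)\, f(X)\big]}{\mathbb{E}\big[\hat{p}(X)\big]} - \frac{\mathbb{E}\big[(1 - \hat{p}(X))\, f(X)\big]}{\mathbb{E}\big[1 - \hat{p}(X)\big]} \right| .
\]

The cited results say that a multiaccurate regressor $\hat{p}$ removes the bias of this estimate, whereas a hard 0-1 proxy, even a Bayes optimal one, can make it maximally biased when base rates differ across groups.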
“…Given a class of groups G, multiaccuracy requires that the expectation of a predictor is close to the expectation of y conditioned on membership in any g ∈ G (Hébert-Johnson et al, 2018; Diana et al, 2021). Kim et al (2019) showed that, for an appropriate choice of groups, multiaccuracy implies a type of multi-group learnability result.…”
Section: Related Work
confidence: 99%
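Written out, the condition described in this statement (with a tolerance $\alpha$ that we introduce for concreteness): a predictor $f$ is multiaccurate with respect to a class of groups $\mathcal{G}$ if

\[
\left| \mathbb{E}\big[ f(X) - Y \mid X \in g \big] \right| \le \alpha \quad \text{for every } g \in \mathcal{G},
\]

i.e., $f$ matches the average of $y$ to within $\alpha$ on every group in the class, though it may still err badly on individual points within a group.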