Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
DOI: 10.1145/3442188.3445894

Leave-one-out Unfairness

Abstract: We introduce leave-one-out unfairness, which characterizes how likely it is that a model's prediction for an individual will change due to the inclusion or removal of a single other person in the model's training data. Leave-one-out unfairness appeals to the idea that fair decisions are not arbitrary: they should not be based on the chance event of any one person's inclusion in the training data. Leave-one-out unfairness is closely related to algorithmic stability, but it focuses on the consistency of an individual point…
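As a rough empirical reading of this definition, one can retrain the model with each training point held out in turn and count how often the prediction for a fixed individual flips. The sketch below is a minimal illustration, assuming a logistic-regression model and an illustrative function name; it is not code from the paper.

```python
# Illustrative sketch of the quantity leave-one-out unfairness describes:
# how often a fixed individual's prediction changes when a single other
# person is removed from the training data. Not code from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def leave_one_out_flip_rate(X_train, y_train, x_target):
    """Fraction of single-point removals that flip the prediction for x_target."""
    base_pred = LogisticRegression().fit(X_train, y_train).predict([x_target])[0]
    flips = 0
    for i in range(len(X_train)):
        # Retrain with person i left out and re-predict for the same individual.
        X_loo = np.delete(X_train, i, axis=0)
        y_loo = np.delete(y_train, i, axis=0)
        loo_pred = LogisticRegression().fit(X_loo, y_loo).predict([x_target])[0]
        flips += int(loo_pred != base_pred)
    return flips / len(X_train)

# Example on synthetic data; x_target is an individual outside the training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
print(leave_one_out_flip_rate(X, y, rng.normal(size=5)))
```

On a well-separated synthetic task like this the flip rate is typically near zero; the citing works below observe that for deep models, small changes to the training process can change predictions far more often.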

Cited by 16 publications (17 citation statements)
References 30 publications
“…While in this work, we address the problem of multiplicitous deep models producing varying outputs on counterfactual examples, recent work has shown that there are large differences in model prediction behavior on any input across small changes to the model (Black & Fredrikson, 2021; Marx et al., 2019; D'Amour et al., 2020). Instability has also been shown to be a problem for gradient-based explanations, although this is largely studied in an adversarial context (Dombrowski et al., 2019; Ghorbani et al., 2019; Heo et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%
“…However, preventing inconsistent predictions and abstaining from uncertain predictions are different goals: in our setting, the aim is to predict the mode across models drawn from a certain distribution, whereas calibration is measured against predicting the true label. Moreover, prior work has shown that confidence scores may not be correlated with prediction consistency across models with different random initializations (Black and Fredrikson, 2021). Finally, while abstaining on points with low confidence scores may lead to greater consistency, it may not yield a guarantee, which this work provides.…”
Section: Related Work (mentioning)
confidence: 89%
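The mode-prediction goal in the statement above can be sketched as a majority vote over models drawn from the relevant distribution, abstaining when no label clearly dominates. This is a hypothetical illustration, not the guaranteed procedure the citing work provides; the agreement threshold and function name are assumptions.

```python
# Rough sketch: majority vote across models drawn from a distribution
# (e.g., retrainings with different seeds), abstaining when no label
# dominates. Threshold and names are illustrative assumptions.
from collections import Counter

def mode_or_abstain(models, x, min_agreement=0.8):
    """Return the modal prediction across models, or None to abstain."""
    votes = [m.predict([x])[0] for m in models]
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None
```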
“…As the initial parameters of the model tend to be determined by a random seed, we will interchangeably refer to this as the selection of random seed. More generally, both of these types of choices instantiate a broader class of choices that could be considered arbitrary, despite the fact that they may impact the predictions (Black and Fredrikson, 2021; Marx et al., 2019; Mehrer et al., 2020) (Section 5.1) and explanations (Section 5.2) of the resulting model.…”
Section: Notation and Preliminaries (mentioning)
confidence: 99%
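The impact of the random seed described above can be checked directly by training two models that differ only in their seed and measuring how often their predictions disagree. The sketch below is illustrative; the architecture and helper name are assumptions rather than the cited papers' experimental setups.

```python
# Illustrative: fraction of test inputs on which two models that differ
# only in random seed (and hence initialization) disagree.
import numpy as np
from sklearn.neural_network import MLPClassifier

def seed_disagreement(X_train, y_train, X_test, seeds=(0, 1)):
    preds = []
    for seed in seeds:
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=seed)  # only the seed differs
        preds.append(clf.fit(X_train, y_train).predict(X_test))
    return float(np.mean(preds[0] != preds[1]))
```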