2019
DOI: 10.48550/arxiv.1903.01209
Preprint

On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning

Abstract: Most existing notions of algorithmic fairness are one-shot: they ensure some form of allocative equality at the time of decision making, but do not account for the adverse impact of the algorithmic decisions today on the long-term welfare and prosperity of certain segments of the population. We take a broader perspective on algorithmic fairness. We propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the und…

Cited by 5 publications (6 citation statements)
References 21 publications
“…Our work contributes to similar efforts in fair machine learning literature towards broadening the scope of analysis to include these effects [38,28,17]. Moreover, in exploring the impact of these dynamics, our work goes beyond assessments of algorithmic performance in static settings, furthering research on the long-term impact of proposed interventions [30,29,37].…”
Section: Related Work (mentioning; confidence: 83%)
“…Liu et al. [32] and Kannan et al. [26] study how a utility-maximizing decision-maker may respond to the predictions made by a predictive rule (e.g., the decision-maker may interpret/utilize the predictions in a certain way or decide to update the model entirely). Mouzannar et al. [35] and Heidari et al. [21] propose several dynamics for how individuals within a population may react to predictive rules by changing their qualifications. Dong et al. [14], Hu et al. [24], and Milli et al. [34] address strategic classification, a setting in which decision subjects are assumed to respond strategically and potentially untruthfully to the choice of the predictive model, and the goal is to design classifiers that are robust to strategic manipulation.…”
Section: Related Work (mentioning; confidence: 99%)
“…For algorithmic fairness in the sequential setting, one line of work has studied fairness in the sequential learning setting while not considering the long-term impact of actions [Joseph et al., 2016, Bechavod et al., 2019, Liu et al., 2017, Gupta and Kamble, 2019, Gillen et al., 2018]. For the study of delayed impacts of actions, recent works mostly focus on addressing one-step delayed impacts or a multi-step sequential setting with full information [Heidari et al., 2019, Hu and Chen, 2018, Liu et al., 2018, Mouzannar et al., 2019, Cowgill and Tucker, 2019, Bartlett et al., 2018]. Our work differs from the above and studies delayed impacts of actions in sequential decision making under uncertainty.…”
Section: Related Work (mentioning; confidence: 99%)
“…While this is a relatively under-explored (but important) topic, several recent works have looked into the problem of delayed impact of actions in algorithmic fairness, with results focused on understanding the impact of a one-step delay of actions [Liu et al., 2018, Kannan et al., 2019, Heidari et al., 2019] or a sequential decision-making setting without uncertainty [Mouzannar et al., 2019, Zhang et al., 2019, Hu and Chen, 2018].…”
Section: Introduction (mentioning; confidence: 99%)