Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/311
Preventing Disparate Treatment in Sequential Decision Making

Abstract: We study fairness in sequential decision making environments, where at each time step a learning algorithm receives data corresponding to a new individual (e.g. a new job application) and must make an irrevocable decision about him/her (e.g. whether to hire the applicant) based on observations made so far. In order to prevent cases of disparate treatment, our time-dependent notion of fairness requires algorithmic decisions to be consistent: if two individuals are similar in the feature space and arrive during …
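The consistency requirement in the abstract (similar individuals who arrive close together in time should receive similar decisions) can be illustrated with a small sketch. The snippet below is not the paper's algorithm; the function name, the binary-decision assumption, the sliding time window, and the similarity threshold are all illustrative choices, not details taken from the source.

import numpy as np

def consistent_decisions(x_new, history, distance_fn, window, eps, t_now):
    # history: list of (t, x, decision) tuples for previously seen individuals.
    # Only individuals that arrived within the last `window` steps and whose
    # features are within `eps` of x_new constrain the new decision.
    allowed = {0, 1}  # binary decision space, e.g. reject / hire
    for t, x, d in history:
        if t_now - t <= window and distance_fn(x_new, x) <= eps:
            allowed &= {d}  # a similar, recent individual pins the decision
    return allowed  # may be empty if past decisions already conflict

# Hypothetical usage:
dist = lambda a, b: float(np.linalg.norm(np.array(a) - np.array(b)))
past = [(3, [0.50, 1.00], 1), (9, [2.00, 0.10], 0)]
print(consistent_decisions([0.52, 1.01], past, dist, window=10, eps=0.1, t_now=12))
# {1}: the similar applicant seen at t=3 was hired, so consistency forces a hire.

An empty return set would signal that no decision can satisfy all consistency constraints at once, which is the kind of conflict a sequential decision rule has to be designed to avoid.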

Cited by 30 publications (32 citation statements) | References 8 publications
“…Only recently has the literature started looking at the implications of fairness constraints in online learning settings. Our model and results complement a small but growing body of literature in this domain (Joseph et al 2016b, Liu et al 2017, Gillen et al 2018, Heidari and Krause 2018, Celis et al 2018, Joseph et al 2016a, Elzayn et al 2018). The two papers most related to our work are Joseph et al (2016b) and Heidari and Krause (2018) that we discuss below.…”
Section: Related Literature (supporting)
confidence: 81%
“…Our model and results complement a small but growing body of literature in this domain (Joseph et al 2016b, Liu et al 2017, Gillen et al 2018, Heidari and Krause 2018, Celis et al 2018, Joseph et al 2016a, Elzayn et al 2018). The two papers most related to our work are Joseph et al (2016b) and Heidari and Krause (2018) that we discuss below. Joseph et al (2016b) were one of the earliest to study the impact of fairness constraints on learning in a contextual multi-armed bandit setting under a utility maximization objective.…”
Section: Related Literature (supporting)
confidence: 81%
“…In the organizational justice literature, distributive justice is often assessed either by comparing an individual's inputs to their outputs (i.e., within individuals) or by comparing an individual's outcomes to others' outcomes (i.e., between individuals or groups). In our review of the AI fairness literature, we found that distributive fairness was overwhelmingly measured by comparing outcomes across people rather than within the individual (e.g., Glymour & Herington, 2018; Grgić-Hlača et al., 2018; Heidari & Krause, 2018). For example, Heidari and Krause (2018) argued that fairness should be measured by determining whether similar individuals should be assigned similar outcomes by the algorithm.…”
Section: AI and Distributive Fairness (mentioning)
confidence: 99%
“…In our review of the AI fairness literature, we found that distributive fairness was overwhelmingly measured by comparing outcomes across people rather than within the individual (e.g., Glymour & Herington, 2018; Grgić-Hlača et al., 2018; Heidari & Krause, 2018). For example, Heidari and Krause (2018) argued that fairness should be measured by determining whether similar individuals should be assigned similar outcomes by the algorithm. This focus on assessing fairness among individuals or groups might stem from the AI fairness literature's tendency to measure distributive fairness from the outside (i.e., detecting disparate impact of outcomes via a mathematically validated algorithm or legal standard) rather than from the perspective of the impacted individual.…”
Section: AI and Distributive Fairness (mentioning)
confidence: 99%