2020
DOI: 10.1257/pandp.20201036

An Economic Perspective on Algorithmic Fairness

Abstract: There are widespread concerns that the growing use of machine learning algorithms in important decisions may reproduce and reinforce existing discrimination against legally protected groups. Most of the attention to date on issues of “algorithmic bias” or “algorithmic fairness” has come from computer scientists and machine learning researchers. We argue that concerns about algorithmic fairness are at least as much about questions of how discrimination manifests itself in data, decision-making under uncertainty…

Cited by 51 publications (33 citation statements); references 6 publications.
“…Given the known possible biases with historical data, we investigated whether protocols related to the labeling of the training data could have an impact on performance. 65,66…”
Section: Methods
confidence: 99%
“…Researchers at LinkedIn showed that the Atkinson index could be used to promote more equitable design choices when used in A/B testing (Saint-Jacques et al 2020). More generally, there has been much work by economists and social scientists to frame the questions surrounding algorithmic fairness (Rambachan et al 2020; Cowgill and Tucker 2020). One particularly relevant work emphasizes that equality and power are often better ways to frame questions of algorithmic harms, and it provides a basis for new metrics for computing quantities relevant to these issues from economic theory (Kasy and Abebe 2021).…”
Section: Prior Work
confidence: 99%
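The Atkinson index mentioned in the statement above is a standard inequality measure with a closed form. The sketch below is illustrative only — the function name and the default inequality-aversion parameter ε are assumptions, not taken from any of the cited papers:

```python
import math

def atkinson_index(values, epsilon=0.5):
    """Atkinson inequality index over non-negative outcomes.

    epsilon >= 0 is the inequality-aversion parameter: epsilon = 0 means
    indifference to inequality; larger values weight worse-off units more.
    Returns 0 under perfect equality and approaches 1 as dispersion grows.
    """
    n = len(values)
    mean = sum(values) / n
    if epsilon == 1:
        # Limit case: one minus the ratio of geometric to arithmetic mean
        geo = math.exp(sum(math.log(v) for v in values) / n)
        return 1 - geo / mean
    # Equally-distributed-equivalent outcome, then normalize by the mean
    ede = (sum(v ** (1 - epsilon) for v in values) / n) ** (1 / (1 - epsilon))
    return 1 - ede / mean

print(atkinson_index([10, 10, 10, 10]))  # ≈ 0 (perfect equality)
print(atkinson_index([1, 5, 10, 24]))    # > 0 (dispersed outcomes)
```

In an A/B-testing setting, one would compute the index over per-user outcomes in each arm and prefer variants that improve the mean without worsening the index.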
“…Dwork et al (2012) call for the idea that similar individuals should be treated similarly, which requires an appropriate measure of similarity. Group-based fairness requires that algorithms have equal error rates across groups defined by protected attributes such as race and gender (Hardt et al 2016, Kleinberg et al 2018, Zafar et al 2019, Rambachan et al 2020, Mitchell et al 2021). Popular fairness criteria include demographic parity, equality of opportunity, and equalized odds, to name a few.…”
Section: Algorithmic Fairness
confidence: 99%
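The group-based criteria named in the statement above reduce to comparing a few per-group rates. A minimal sketch, assuming binary labels and predictions encoded as 0/1 (the function name and data are hypothetical): demographic parity compares selection rates, equality of opportunity compares true positive rates, and equalized odds additionally compares false positive rates.

```python
def group_rates(y_true, y_pred, group):
    """Per-group selection rate, true positive rate, and false positive rate."""
    stats = {}
    for g in set(group):
        idx = [i for i, gi in enumerate(group) if gi == g]
        yt = [y_true[i] for i in idx]
        yp = [y_pred[i] for i in idx]
        positives = sum(yt)
        # Selection rate: fraction of the group predicted positive
        sel = sum(yp) / len(yp)
        # TPR: predicted positive among true positives (guard empty class)
        tpr = sum(p for t, p in zip(yt, yp) if t == 1) / max(positives, 1)
        # FPR: predicted positive among true negatives
        fpr = sum(p for t, p in zip(yt, yp) if t == 0) / max(len(yt) - positives, 1)
        stats[g] = {"selection_rate": sel, "tpr": tpr, "fpr": fpr}
    return stats

# Toy data: demographic parity holds (equal selection rates),
# but equality of opportunity fails (TPRs differ across groups).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group = ["a"] * 4 + ["b"] * 4
print(group_rates(y_true, y_pred, group))
```

Kleinberg et al (2018) and related impossibility results show that, outside degenerate cases, these criteria cannot all be equalized simultaneously, which is why the choice among them is a substantive policy question rather than a purely technical one.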