Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3600211.3604699
Not So Fair: The Impact of Presumably Fair Machine Learning Models

Mackenzie Jorgensen,
Hannah Richert,
Elizabeth Black
et al.


Cited by 4 publications (2 citation statements). References 28 publications (51 reference statements).
“…This model shows that constraining selection policies to satisfy certain fairness criteria, such as demographic parity or equal opportunity, can sometimes lead to greater declines in qualification status compared to the unconstrained policy [64]. Jorgensen et al. extend this result to several other fairness criteria [51]. Williams and Kolter model a loan approval setting similar to that of Liu et al., with slight modifications to the update function [91].…”

Section: Related Work

Confidence: 92%
“…However, theoretical results show that it is impossible to satisfy different fairness notions simultaneously (Chouldechova, 2017; Kleinberg et al., 2017). Fairness notions are in tension not only with each other (Alves et al., 2023), but also with other quality requirements of AI systems, such as predictive accuracy (Menon & Williamson, 2018), calibration (Pleiss et al., 2017), impact (Jorgensen et al., 2023), and privacy (Cummings et al., 2019), for which Pareto optimality should be considered (Wei & Niethammer, 2022). Moreover, the choice of a fairness metric requires taking several contrasting objectives into account: stakeholders' utility, human value alignment (Friedler et al., 2021), people's actual perception of fairness (Saha et al., 2020; Srivastava et al., 2019), and legal and normative constraints (Xenidis, 2020; Kroll et al., 2017).…”

Section: Using Fair-AI With Guidance

Confidence: 99%