Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 2020
DOI: 10.1145/3375627.3375808
Normative Principles for Evaluating Fairness in Machine Learning

Cited by 29 publications (26 citation statements) | References 6 publications
“…The large gap in the FPR rates, despite addressing historical bias against a minority group, thus poses a problem from this viewpoint. From a political and moral philosophy perspective, judgements on fairness are often driven by broader normative principles [33]. In this regard, an unequal error rate could be used to correct a past bias.…”
Section: Geometric Deep Learning: A Fairer Approach? (mentioning; confidence: 99%)
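The statement above turns on the size of the gap in false positive rates between groups. As a rough illustration of what such a gap measures, here is a minimal Python sketch; the helper fpr_gap, the toy arrays, and the two-group setup are hypothetical, not drawn from the cited paper:

import numpy as np

def fpr_gap(y_true, y_pred, group):
    """Per-group false positive rates and their absolute gap.

    FPR = FP / (FP + TN), computed separately for each group.
    """
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)   # ground-truth negatives in group g
        fp = np.sum(negatives & (y_pred == 1))     # negatives the model flagged positive
        rates[g] = fp / max(np.sum(negatives), 1)  # guard against empty groups
    vals = sorted(rates.values())
    return vals[-1] - vals[0], rates

# Toy data: binary labels, binary predictions, two groups.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap, rates = fpr_gap(y_true, y_pred, group)
print(rates)  # {'a': 0.333..., 'b': 0.5}
print(gap)    # 0.1666...

Whether a nonzero gap is unfair, or instead a legitimate correction for past bias, is exactly the normative question the statement raises; the arithmetic alone does not settle it.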
“…Philosophical research directly on fair ML can be roughly divided based on its stance towards the standard approach. Some research seeks to clarify and improve the often-implicit normative underpinnings and commitments of various fairness measures (Glymour & Herington, 2019; Hellman, 2020; Leben, 2020). Many fairness measures are purportedly based in moral and legal doctrines (e.g., ‘disparate impact’, ‘equality of opportunity’, …), and so we might naturally expect that those statistical measures would track the conceptual intuitions in a wide range of cases.…”
Section: Solutions to Bias (mentioning; confidence: 99%)
“…Bias-related issues can be identified by a proper analysis of the decisions made by the workflow, which in turn requires models to be accountable and transparent enough to thoroughly characterize their sensitivity to bias, and how inputs and outputs (decisions) correlate in regards to protected features. It is also remarkable to note that several proposals have been made to quantify fairness in machine learning pipelines, yielding useful metrics that account for the parity of models when processing groups of inputs [205, 206]. Without these aspects being considered jointly with performance measures, data-based ITS developments in years to come are at the risk of being restricted to the academia playground [207].…”
Section: Emerging AI Areas Towards Actionable ITS (mentioning; confidence: 99%)
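The parity metrics this statement mentions compare how a model treats different groups of inputs. Below is a minimal sketch of one common family, demographic (statistical) parity, assuming a single binary prediction and a categorical sensitive attribute; the function name and data are illustrative, not taken from [205, 206]:

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest difference in positive-prediction rates across groups.

    Zero means every group receives positive outcomes at the same
    rate (statistical parity); larger values mean greater disparity.
    """
    rates = {g: float(np.mean(y_pred[group == g])) for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # binary model decisions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

diff, rates = demographic_parity_difference(y_pred, group)
print(rates)  # {'a': 0.75, 'b': 0.25}
print(diff)   # 0.5

Metrics of this kind make group-level disparities measurable, but, as the paper under discussion argues, choosing which metric should constrain a system remains a normative decision rather than a purely statistical one.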