2024
DOI: 10.1145/3616865
Fairness in Machine Learning: A Survey

Simon Caton,
Christian Haas

Abstract: When Machine Learning technologies are used in contexts that affect citizens, companies as well as researchers need to be confident that there will not be any unexpected social implications, such as bias towards gender, ethnicity, and/or people with disabilities. There is significant literature on approaches to mitigate bias and promote fairness, yet the area is complex and hard to penetrate for newcomers to the domain. This article seeks to provide an overview of the different schools of thought and approache…

Cited by 75 publications (27 citation statements)
References 159 publications
“…Fairness and explainability: This is another very important aspect, and it is gaining a lot of attention recently (Caton and Haas 2020). When a deep‐learning model (or any machine‐learning model) is deployed, we need to be careful of how it may treat real‐world entities (in the case of Netflix, members and videos for example), and whether there are any unintentional biases that cause the model to treat some entities in an unfair way.…”
Section: Practical Challenges
confidence: 99%
“…Pre-processing methods adjust or transform the data to ensure balanced representation and remove discrimination. Examples include resampling (adjusting the data to balance the representation of different groups [3]), reweighting (assigning different weights to samples to counteract imbalances [16]), and removing protected attributes (removing features like race, gender, or age that are protected and could lead to biased decisions [28]). In-processing methods modify the training of the algorithm to incorporate fairness constraints or objectives directly.…”
Section: Results
confidence: 99%
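The reweighting idea quoted in the statement above can be illustrated with a short sketch. This is a minimal example under assumed column names ("gender" as the protected attribute, "hired" as the label), not the exact method of the cited works: each sample receives a weight equal to the ratio of the expected to the observed frequency of its (group, label) combination, so that the protected attribute and the label appear statistically independent in the weighted data.

# Minimal reweighting sketch (hypothetical column names, illustrative only).
import pandas as pd

def reweigh(df: pd.DataFrame, protected: str, label: str) -> pd.Series:
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for g in df[protected].unique():
        for y in df[label].unique():
            mask = (df[protected] == g) & (df[label] == y)
            if mask.any():
                observed = mask.sum() / n                                        # P(group, label) in the data
                expected = (df[protected] == g).mean() * (df[label] == y).mean() # P(group) * P(label)
                weights[mask] = expected / observed
    return weights

# Usage: pass the weights to any learner that accepts sample weights, e.g.
# LogisticRegression().fit(X, y, sample_weight=reweigh(df, "gender", "hired"))

The same weights can also be used for resampling: drawing a bootstrap sample with probabilities proportional to the weights yields a dataset in which the groups are balanced across labels.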
“…Generative AI engines that power CDHs seem realistic, in part, because they reflect our human biases, such as attitudes and inclinations organized around race, gender, and other stereotypes. Trade‐offs between fairness and accurately portraying biased human social interactions generate what Simon Caton and Christian Haas call a “fairness dilemma.” They observe that, when fairness measures are implemented, the emphasis must be put on either fairness or model performance, as improving one can often be detrimental to the other [18]. Over time, users themselves can create further algorithmic bias in their CDHs through machine learning.…”
Section: Ethical Concerns
confidence: 99%
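The "fairness dilemma" mentioned in the statement above can be made concrete by scoring the same predictions on both a performance metric and a group-fairness metric. The sketch below is an assumption-laden illustration: the array names (y_true, y_pred, group) are hypothetical, and demographic parity difference stands in for whichever fairness measure a practitioner actually adopts; tightening a fairness constraint typically shrinks the gap but can lower accuracy.

# Minimal sketch of the fairness/performance trade-off (illustrative names).
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Fraction of predictions that match the labels.
    return float((y_true == y_pred).mean())

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    # Absolute difference in positive-prediction rates across groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))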