2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase51524.2021.9678568
Did You Do Your Homework? Raising Awareness on Software Fairness and Discrimination

Abstract: Machine Learning is a vital part of various modern-day decision-making software. At the same time, it has been shown to exhibit bias, which can cause unjust treatment of individuals and population groups. One method to achieve fairness in machine learning software is to provide individuals with the same degree of benefit, regardless of sensitive attributes (e.g., students receive the same grade, independent of their sex or race). However, there can be other attributes that one might want to discriminate against …
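The equal-benefit notion sketched in the abstract is often quantified by comparing outcome rates across groups of a sensitive attribute. A minimal, illustrative sketch of one such measure — the statistical parity difference on binary predictions — is shown below; this is a generic fairness metric, not the specific method proposed in the paper, and the function name and toy data are assumptions for illustration:

```python
def statistical_parity_difference(predictions, sensitive):
    """Difference in positive-outcome rates between the two groups
    encoded in `sensitive` (values 0/1); 0.0 indicates parity."""
    rate = {}
    for group in (0, 1):
        outcomes = [p for p, s in zip(predictions, sensitive) if s == group]
        rate[group] = sum(outcomes) / len(outcomes)
    return rate[1] - rate[0]

# Toy example: pass/fail grades (1 = pass) for students split by a
# hypothetical binary sensitive attribute (e.g., sex encoded as 0/1).
preds = [1, 0, 1, 1, 0, 1, 1, 0]
sens  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(preds, sens))  # -0.25
```

A value of 0.0 would mean both groups receive the positive outcome at the same rate; here group 1 passes 25 percentage points less often than group 0.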


Cited by 14 publications (8 citation statements)
References 34 publications
“…To improve the fairness-performance trade-off for ML models, Chen et al [45] used an ensemble approach, which combined models trained for different objectives (i.e., fairness and performance metrics). Hort and Sarro [46] showed that while bias of ML models can be reduced, this can come at the cost of losing the ability to differentiate between desired features.…”
Section: Realising Fair Software (mentioning)
confidence: 99%
“…Hort and Sarro [163] observed another side effect of fairness repair: it could cause loss of discriminatory behaviours of anti-protected attributes. Anti-protected attributes refer to the attributes that one might want the ML decision to depend upon (e.g., students with homework should receive higher grades).…”
Section: Algorithm Testing (mentioning)
confidence: 99%
“…Fairness is a critical non-functional testing property of data-driven applications and machine learning software [42]. As such, it has received an increasing attention from both the software engineering [9,12,20,41] and machine learning research communities [6,37]. Among others, Brun et al [9] named this "software fairness" and called for software engineers to combat such discrimination and build fair software.…”
Section: Fairness In Software Engineering (mentioning)
confidence: 99%