Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020)
DOI: 10.1145/3368089.3409704
Do the machine learning models on a crowd sourced platform exhibit bias? an empirical study on model fairness

Cited by 62 publications (71 citation statements).
References 30 publications (37 reference statements).
“…Overall, the most biased stages (TT7(LE), TT8(CT), TT4(CT), TT1(MV), GC8(SS)) improve performance. This stage-specific tradeoff aligns with the overall performance-fairness tradeoff discussed in prior work [10,17,26], which can be compared quantitatively using the work of Hort et al. [36]. Third, we found that some stages decrease performance, in either accuracy or F1 score.…”
Section: Fairness-Performance Tradeoff (supporting)
confidence: 84%
“…This comparison provides the necessary data to compute the four fairness metrics. As in [10,26], for each stage in a pipeline we run this experiment ten times and report the mean and standard deviation of the metrics, to smooth out the randomness in the ML classifiers. Finally, we followed ML best practices so that no noise is introduced when evaluating the fairness of preprocessing stages.…”
Section: Experiments Design (mentioning)
confidence: 99%
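The repeated-runs protocol quoted above can be sketched as follows. Everything in this snippet is illustrative: the fairness metric (statistical parity difference is one common choice; the paper's four metrics are not named in this excerpt), the synthetic data, and the stand-in classifier are all assumptions, not the authors' actual pipeline.

```python
import random
import statistics

def statistical_parity_difference(y_pred, protected):
    """P(pred=1 | protected=0) - P(pred=1 | protected=1)."""
    unpriv = [p for p, a in zip(y_pred, protected) if a == 0]
    priv = [p for p, a in zip(y_pred, protected) if a == 1]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

def run_stage_once(seed):
    """Stand-in for training one pipeline stage and predicting on a test set."""
    rng = random.Random(seed)
    protected = [rng.randint(0, 1) for _ in range(500)]
    # Hypothetical classifier whose positive rate depends slightly on the group.
    y_pred = [1 if rng.random() < (0.6 - 0.1 * a) else 0 for a in protected]
    return statistical_parity_difference(y_pred, protected)

# Run ten times with different seeds; report mean and standard deviation,
# mirroring the "run this experiment ten times" protocol in the citation.
values = [run_stage_once(seed) for seed in range(10)]
print(f"SPD: {statistics.mean(values):.3f} +/- {statistics.stdev(values):.3f}")
```

Reporting mean and standard deviation over seeded repetitions is what lets stage-level fairness differences be distinguished from run-to-run classifier noise.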
“…Harrison et al. [29] studied humans' perceived fairness of ML models. Biswas and Rajan [8] studied the fairness of ML models on crowd-sourced platforms. Finkelstein et al. [25] explored fairness in requirements analysis and showed differing needs among customers.…”
Section: Software Engineering for ML Fairness (mentioning)
confidence: 99%
“…It would also be interesting to go beyond accuracy bugs to detect and localize more non-functional bugs, e.g., fairness bugs [77].…”
Section: Conclusions and Future Work (mentioning)
confidence: 99%