2019 IEEE International Conference on Big Data (Big Data), 2019
DOI: 10.1109/bigdata47090.2019.9006487
FAE: A Fairness-Aware Ensemble Framework

Abstract: Automated decision making based on big data and machine learning (ML) algorithms can result in discriminatory decisions against certain protected groups, defined upon personal data such as gender, race, or sexual orientation. Algorithms designed to discover patterns in big data might not only pick up societal biases encoded in the training data but, even worse, reinforce such biases, resulting in more severe discrimination. The majority of thus far proposed fairness-aware machine learning appro…


Citation Types: supporting (1), mentioning (17), contrasting (0)

Year Published: 2020 (2), 2022 (2)


Citations: cited by 36 publications (19 citation statements)
References: 12 publications (16 reference statements)
“…In reality, though, the low discrimination scores are just an artifact of the low prediction rates for the minority class. This observation has been made in [20,21,22], but for the static case. We observe the same issue in the streaming case and therefore propose an imbalance-monitoring mechanism based on which we adapt the weighted training distribution.…”
supporting
confidence: 68%
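
The artifact described in this statement is easy to reproduce. Below is a minimal sketch (a hypothetical illustration, not code from FAE or the citing work; the group encoding and the ~2% positive-prediction rate are assumptions) showing that when a classifier almost never predicts the minority (positive) class, the statistical parity difference is necessarily close to zero regardless of how the model treats each group:

# Hypothetical sketch: low "discrimination" scores as an artifact of
# low positive prediction rates, per the statement above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # 0 = non-protected, 1 = protected (assumed encoding)
y_pred = (rng.random(n) < 0.02).astype(int)   # positives predicted for only ~2% of instances

# Statistical parity difference: P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)
p0 = y_pred[group == 0].mean()
p1 = y_pred[group == 1].mean()
print(f"positive rate g0 = {p0:.3f}, g1 = {p1:.3f}, parity difference = {p0 - p1:.3f}")
# Both group-wise positive rates are tiny, so their difference is tiny:
# a low discrimination score that says nothing about minority-class quality.

When positives are rarely predicted, both group-wise rates are small and so is their difference; a degenerate majority-class predictor therefore looks fair under statistical parity, which is exactly why the statement argues for monitoring imbalance alongside fairness.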
“…Finally, CSMOTE amplifies unfairness (and causes reverse discrimination on the NYPD dataset) by re-sampling instances. A reason for this behavior can be the amplification of existing encoded biases in the data through instance re-sampling (also reported in [20]).…”
Section: Results on Cumulative Statistical Parity
mentioning
confidence: 99%
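
The amplification this statement reports can be sketched with plain random over-sampling standing in for CSMOTE (an assumption: CSMOTE is a streaming SMOTE variant that is not reproduced here, and all rates below are invented). Replicated minority-class instances inherit whatever group skew is already encoded in that class, so the group gap in positive rates widens:

# Hypothetical sketch: naive minority over-sampling amplifying a bias
# already encoded in the data (stand-in for the cited CSMOTE behavior).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                 # 1 = protected group (assumed encoding)
# Encoded bias: the positive (minority) class is rarer for the protected group.
y = (rng.random(n) < np.where(group == 1, 0.02, 0.08)).astype(int)

def positive_rate_gap(y, group):
    """Difference in positive-class rates between the two groups."""
    return y[group == 0].mean() - y[group == 1].mean()

print("gap before re-sampling:", round(positive_rate_gap(y, group), 3))

# Random over-sampling: replicate minority-class instances until classes balance.
pos_idx = np.flatnonzero(y == 1)
extra = rng.choice(pos_idx, size=(y == 0).sum() - len(pos_idx), replace=True)
y_rs, g_rs = np.concatenate([y, y[extra]]), np.concatenate([group, group[extra]])
print("gap after re-sampling:", round(positive_rate_gap(y_rs, g_rs), 3))
# Mostly non-protected positives get replicated, so the gap grows sharply:
# the re-sampled training distribution encodes a stronger bias than the original.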
“…Recent studies [12,14] also showed that many datasets in the domain are imbalanced and that, typically, the class-imbalance problem is more severe for the protected group (e.g., female, black), which is often underrepresented in the important minority class [13]. Despite high overall accuracy and fairness, methods that ignore imbalance may still perform poorly on the minority class, thus amplifying prevalent biases.…”
Section: Introduction
mentioning
confidence: 99%
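
A quick way to see the pattern these studies describe is to compare the minority-class share within each group. The sketch below is purely illustrative (the group encoding and per-group rates are assumptions, not figures from [12,13,14]):

# Hypothetical sketch: class imbalance is more severe for the protected group.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)                 # 1 = protected group (assumed encoding)
p_pos = np.where(group == 1, 0.03, 0.12)      # assumed per-group positive-class rates
y = (rng.random(n) < p_pos).astype(int)

for g in (0, 1):
    share = y[group == g].mean()
    print(f"group {g}: minority-class share = {share:.3f} "
          f"(imbalance ratio ~ {(1 - share) / share:.0f}:1)")
# The protected group faces a far steeper imbalance (~32:1 vs ~7:1 here),
# so a model tuned for overall accuracy can ignore its minority class
# almost entirely while aggregate accuracy and parity metrics stay high.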