2020
DOI: 10.1007/978-3-030-61527-7_11
FABBOO - Online Fairness-Aware Learning Under Class Imbalance

Cited by 17 publications (17 citation statements)
References 20 publications
“…Our contributions are summarized as follows: i) we propose FABBOO, a fairness- and class-imbalance-aware boosting approach that is able to tackle class imbalance as well as mitigate different parity-based discriminatory outcomes, ii) we introduce the notion of cumulative fairness in streams, which accounts for cumulative discriminatory outcomes, iii) our experiments, on a variety of real-world and synthetic datasets, show that our approach outperforms existing fairness-aware approaches. This work is an extension of our previous work [23]. The major changes include: i) modifying the distribution-update part by also reducing the majority weights, ii) extending FABBOO to facilitate another parity-based notion of fairness, namely predictive equality, iii) adding two real-world datasets to the experimental evaluation, iv) adding a recently published state-of-the-art imbalance-aware stream classifier [2] for comparison, v) providing a detailed analysis w.r.t. FABBOO's hyper-parameter selection.…”
mentioning
confidence: 68%
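The "cumulative fairness" notion quoted above (discrimination assessed over the whole stream seen so far, rather than per batch) can be illustrated with a minimal sketch. The class and metric here are illustrative assumptions, not FABBOO's actual implementation: we track positive predictions per protected group from the start of the stream and report the cumulative statistical-parity difference.

```python
class CumulativeParityMonitor:
    """Tracks cumulative statistical parity over a data stream.

    Positive predictions are counted per group from the start of
    the stream, so discriminatory outcomes accumulate instead of
    being reset at each time step. Illustrative sketch only.
    """

    def __init__(self):
        # group 0 = protected, group 1 = non-protected
        self.seen = {0: 0, 1: 0}       # instances observed per group
        self.positives = {0: 0, 1: 0}  # positive predictions per group

    def update(self, group, prediction):
        self.seen[group] += 1
        if prediction == 1:
            self.positives[group] += 1

    def parity_difference(self):
        # P(y_hat = 1 | non-protected) - P(y_hat = 1 | protected),
        # computed over all instances seen so far
        rates = {
            g: self.positives[g] / self.seen[g] if self.seen[g] else 0.0
            for g in (0, 1)
        }
        return rates[1] - rates[0]


monitor = CumulativeParityMonitor()
# (group, predicted label) pairs arriving from a stream
for group, pred in [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0), (1, 0)]:
    monitor.update(group, pred)
print(round(monitor.parity_difference(), 3))  # prints 0.333
```

A boosting learner like the one described could consult such a cumulative statistic when reweighting instances, so that short-term balanced windows do not mask long-term disparity.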
“…Previous research considers sex as a protected attribute (Iosifidis & Ntoutsi, 2019; Iosifidis & Ntoutsi, 2020; Ristanoski et al., 2013). The attribute race = { white, black, asian-pac-islander, amer-indian-eskimo, other } could also be employed as a protected attribute because it has the same role as in the original Adult dataset.…”
Section: Datasets For Fairness
mentioning
confidence: 99%
“…By applying generalization methods to train a model that is robust to various distribution shifts, researchers examine the effects of robustness on fairness. Iosifidis et al. [25,26] apply pre-processing and distribution-shift methods in a streaming-classification framework in which the dataset or the model has to be corrected at each time step so that the model stays stable. Singh et al. [44] utilize causal learning to build a model that is insensitive to distribution shifts that might occur in the features (i.e., covariate shift).…”
Section: Related Work
mentioning
confidence: 99%
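The per-time-step correction mentioned in this citation statement can be sketched with a time-decayed estimate of the class distribution, a common device in imbalance-aware stream classifiers. The function and decay value below are illustrative assumptions, not the exact scheme of the cited works:

```python
def update_class_ratio(ratio, label, decay=0.99):
    """Time-decayed estimate of the positive-class ratio in a stream.

    At each time step the running ratio is corrected with the newly
    arrived label, so older data is gradually forgotten and the model
    can track drift in the class distribution. Illustrative sketch.
    """
    return decay * ratio + (1 - decay) * (1 if label == 1 else 0)


ratio = 0.5  # start with no imbalance assumed
# a mostly-negative stream drives the estimate toward 0
for label in [0] * 100:
    ratio = update_class_ratio(ratio, label)
print(ratio < 0.25)  # prints True
```

A stream classifier can use such a running ratio to rescale instance weights at every step, which is one concrete way "the dataset or the model has to be corrected at each time step."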