2020
DOI: 10.1109/access.2020.3005987
Joint Transfer of Model Knowledge and Fairness Over Domains Using Wasserstein Distance

Abstract: Owing to the increasing use of machine learning in our daily lives, fairness has recently become an important topic in the machine learning community. Recent studies on fairness in machine learning attempt to ensure statistical independence between individual model predictions and designated sensitive attributes. In reality, however, the sensitive variables of the data used to train a model may differ from those of the data to which the model is applied. In thi…
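The abstract frames fairness as statistical independence between model predictions and a sensitive attribute, measured with the Wasserstein distance. A minimal sketch of that idea, assuming hypothetical model scores and a binary sensitive attribute (not the paper's actual method or data), is to compute the 1-D Wasserstein distance between the two groups' score distributions as a demographic-parity gap:

```python
# Sketch: demographic-parity gap as the 1-D Wasserstein distance between
# the model-score distributions of two sensitive groups. A gap near 0 means
# the score distribution is nearly independent of the sensitive attribute.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)        # hypothetical model outputs in [0, 1]
group = rng.integers(0, 2, size=1000)  # hypothetical binary sensitive attribute

gap = wasserstein_distance(scores[group == 0], scores[group == 1])
print(f"fairness gap: {gap:.4f}")
```

Here both groups are drawn from the same distribution, so the gap is small; a fairness-aware training objective would penalize this quantity so the constraint also holds on the target domain.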


Cited by 13 publications (12 citation statements)
References 18 publications
“…Inspired by [7,9], we design an EEG-based driver fatigue state classification model that achieves results invariant across individuals while maintaining classification accuracy. As shown in Fig.…”
Section: Methods (mentioning)
confidence: 99%
“…In this paper, we introduce a unified EEG-based classification model across subjects, motivated by recent AI work on domain adaptation [7], fairness [8,9], and long-tailed recognition [10,11], developed to reduce the performance gap between different data groups. Inspired by these methods, we aim to mitigate the variability of EEG signals among individuals in the driver fatigue state classification task.…”
Section: Introduction (mentioning)
confidence: 99%
“…In addition, the AI recruitment system used at Amazon gave higher acceptance scores to male applicants than to female applicants [1]. These social issues have drawn criticism of existing AI systems and raised the need for fairness-aware AI systems with respect to sensitive attributes such as gender, age, and ethnicity [2]-[6]. As a result, fairness studies have been performed in a variety of tasks, from simple classification on structured data [7]-[9] to Natural Language Processing (NLP) [2], face recognition [3], and Generative Adversarial Networks (GANs) [4].…”
Section: Introduction (mentioning)
confidence: 99%