Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3306618.3314236
Fair Transfer Learning with Missing Protected Attributes

Cited by 61 publications (44 citation statements)
References 12 publications
“…Hence, despite good intentions, not having these relevant attributes, classes, or tuples creates obstacles to checking the bias constraints, while biases on these privacy-sensitive attributes may still exist because other available attributes are correlated with the protected ones. Thus, more work is needed on understanding the interactions between privacy and unfairness [83], and on accurately inferring the missing attributes from the available data [35,42]. This would be part of the checking process for the bias constraints.…”
Section: Formalizing the Tensions Between Privacy and Fairness (mentioning)
confidence: 99%
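The inference step this quote points to is often implemented as a proxy classifier trained on an auxiliary dataset where the protected attribute is observed, then applied to the data where it is missing. A minimal sketch under that assumption, using synthetic data and scikit-learn (all variable names are illustrative, not from the cited papers):

```python
# Hypothetical sketch: inferring a missing protected attribute from
# correlated, non-protected features via a proxy model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Auxiliary dataset where the protected attribute IS observed.
X_aux = rng.normal(size=(500, 5))
a_aux = (X_aux[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Target dataset where the protected attribute is missing.
X_target = rng.normal(size=(200, 5))

# Train the proxy on auxiliary data, then impute on the target.
proxy = LogisticRegression().fit(X_aux, a_aux)
a_hat = proxy.predict(X_target)              # hard imputation
p_hat = proxy.predict_proba(X_target)[:, 1]  # soft estimate, often preferable
```

The soft estimate `p_hat` is usually the safer choice downstream, since thresholding an uncertain proxy can itself introduce bias into the fairness check.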
“…Ref. [7] tackles the absence of protected attributes using transfer learning from a different dataset that does have protected attributes. Ref.…”
Section: Related Work (mentioning)
confidence: 99%
“…The majority of these algorithms assume that the protected attribute is accurately specified for the training dataset, which is then used to mitigate unwanted biases by processing the input dataset (pre-processing), modifying the training algorithm (in-processing), or post-processing the output of the prediction algorithm. However, the protected attribute is often unavailable or anonymized for legal reasons [5,6,7].…”
Section: Introduction (mentioning)
confidence: 99%
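The bias checks underlying all three mitigation stages take the protected attribute as input, which is why its absence is blocking. A minimal sketch of one such check, demographic parity difference (the metric choice and names here are illustrative, not the source's):

```python
# Hypothetical sketch: demographic parity difference, a standard group
# fairness check. It cannot be computed when `a` (the protected
# attribute) is unavailable or anonymized.
import numpy as np

def demographic_parity_difference(y_pred, a):
    """|P(yhat=1 | a=1) - P(yhat=1 | a=0)|; 0 means parity."""
    y_pred, a = np.asarray(y_pred), np.asarray(a)
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

# Example: a predictor that flags group a=1 far more often.
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0])
a      = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_difference(y_pred, a))  # 0.75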
“…So far, few studies have considered transferring a model's knowledge about fairness to other domains. In [23], a fair transfer learning problem was addressed via an instance-based domain adaptation technique, under the assumption that the sensitive attribute is available for only one of the source and target domains. Madras et al. [11] partially considered the problem by testing whether the fairness of their method (LAFTR) was preserved across different tasks.…”
Section: B. Domain Adaptation (mentioning)
confidence: 99%
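Instance-based domain adaptation of the kind described here typically reweights source examples by an estimated density ratio so that the training loss reflects the target distribution; a fairness penalty can then be evaluated in whichever domain exposes the sensitive attribute. A minimal sketch of the reweighting step (a generic domain-classifier construction, not the exact method of [23]):

```python
# Hypothetical sketch: importance weights for source instances via a
# domain discriminator, a common instance-based adaptation recipe:
# w(x) = p_target(x) / p_source(x) ≈ P(target | x) / P(source | x).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(loc=0.0, size=(400, 4))  # source-domain features
X_tgt = rng.normal(loc=0.5, size=(400, 4))  # covariate-shifted target

# Label each instance by its domain and fit a discriminator.
X = np.vstack([X_src, X_tgt])
d = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
disc = LogisticRegression().fit(X, d)

# Density-ratio weights for the source instances.
p_tgt = disc.predict_proba(X_src)[:, 1]
w = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)
w *= len(w) / w.sum()  # normalize to mean 1

# `w` can now be passed as sample_weight to a downstream classifier,
# with any fairness term computed in the domain that has the
# sensitive attribute.
```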