Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401051

Fairness-Aware Explainable Recommendation over Knowledge Graphs

Abstract: There has been growing attention on fairness considerations recently, especially in the context of intelligent decision making systems. Explainable recommendation systems, in particular, may suffer from both explanation bias and performance disparity. In this paper, we analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups. We show that inactive users may be more susceptible to receiving unsatisfactory recommendat…

Cited by 127 publications (68 citation statements)
References 36 publications
“…Ge et al [16] explore long-term fairness in recommendation and accomplish the problem through dynamic fairness learning. Fu et al [15] propose a fairness constrained approach to mitigate the unfairness problem in the context of explainable recommendation over knowledge graphs. They find that performance bias exists between different user groups, and claim that such bias comes from the different distribution of path diversity.…”
Section: Fair Recommendation
confidence: 99%
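The statement above summarizes the paper's approach as constraining recommendation performance disparity between user groups. The paper's actual formulation is not reproduced here, but the core idea — penalizing the gap in mean recommendation utility between active and inactive users — can be sketched as follows; the function names, group labels, and penalty weight are illustrative assumptions, not the authors' method:

```python
# Hedged sketch of a group-disparity penalty, of the kind a
# fairness-constrained recommender might add to its training objective.
# Group labels ("active"/"inactive"), utility values, and the weight
# `lam` are illustrative assumptions, not the paper's formulation.

def group_disparity(utilities, groups):
    """Absolute gap in mean recommendation utility between two user groups."""
    active = [u for u, g in zip(utilities, groups) if g == "active"]
    inactive = [u for u, g in zip(utilities, groups) if g == "inactive"]
    return abs(sum(active) / len(active) - sum(inactive) / len(inactive))

def fairness_regularized_loss(base_loss, utilities, groups, lam=0.5):
    """Base recommendation loss plus a weighted unfairness penalty."""
    return base_loss + lam * group_disparity(utilities, groups)
```

For example, with per-user utilities [0.9, 0.8] for active users and [0.4, 0.5] for inactive users, the disparity is 0.4, and a base loss of 1.0 with lam=0.5 yields a regularized loss of 1.2. Minimizing such an objective trades a little overall accuracy for a smaller gap between groups.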
“…Interpretability: The most common view of interpretability in RS is to increase the transparency of algorithms [14], [15], [40], [43], [164], which is especially important in health RS. Reliable explanations can greatly improve end-users' confidence in the recommendation results [126].…”
Section: Challenges
confidence: 99%
“…The FairFace dataset [65] is a collection of ≈100 thousand facial images extracted from the YFCC-100M Flickr dataset [165]. Automated models trained on FairFace can exploit age group (age ranges of [0-2], [3-9], [10-19], [20-29], [30-39], [40-49], [50…”
Section: The Dataset
confidence: 99%
“…Unfortunately, few works in the literature have tried to address more than one of these problems simultaneously. Some have tried to face two of them: for example, some works combine fairness with privacy [33-41], others [42-44] combine adversarial learning with fairness, fairness with explainability [45-47], adversarial learning with explainability [48, 49], and adversarial learning with privacy [50-53]. For this reason, in this work, we drive toward the development of systems able to ensure trustworthiness by delivering privacy, fairness, and explainability by design.…”
Section: Introduction
confidence: 99%