Companion Proceedings of The Web Conference 2018 (WWW '18 Companion), 2018
DOI: 10.1145/3184558.3186949

User Fairness in Recommender Systems

Abstract: Recent works in recommender systems have focused on diversity in recommendations as an important aspect of recommendation quality. In this work we argue that post-processing algorithms aimed only at improving diversity among recommendations lead to discrimination among users. We introduce the notion of user fairness, which has so far been overlooked in the literature, and propose measures to quantify it. Our experiments on two diversification algorithms show that an increase in aggregate diversity results…
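The abstract does not spell out the proposed measures, so the following is only an illustrative sketch of one way to quantify user-level unfairness introduced by a diversity-oriented post-processing step: compute each user's utility loss after re-ranking and examine how unevenly that loss is spread across users. The function names, the input format, and the relevance-based utility are assumptions for illustration, not the paper's actual metric.

```python
# Minimal sketch (assumed formulation, not the paper's exact metric):
# measure how unevenly a diversity-oriented re-ranking distributes its
# accuracy cost across users.

import numpy as np

def user_utility(ranked_items, relevance_u, k=10):
    """Sum of predicted relevance of the top-k recommended items for one user."""
    return sum(relevance_u[i] for i in ranked_items[:k])

def user_unfairness(original_lists, diversified_lists, relevance, k=10):
    """Per-user utility loss caused by diversification, plus its spread.

    original_lists[u] / diversified_lists[u]: ranked item ids for user u
    before and after the (hypothetical) diversification post-processing.
    relevance[u]: dict or array of predicted relevance scores for user u.

    A large spread means the cost of aggregate diversity falls unevenly on
    users, i.e., some users pay much more accuracy than others.
    """
    losses = []
    for u, (orig, div) in enumerate(zip(original_lists, diversified_lists)):
        before = user_utility(orig, relevance[u], k)
        after = user_utility(div, relevance[u], k)
        losses.append(before - after)
    losses = np.asarray(losses, dtype=float)
    return {
        "mean_loss": losses.mean(),       # average accuracy paid for diversity
        "std_loss": losses.std(),         # dispersion across users (unfairness proxy)
        "worst_user_loss": losses.max(),  # the most penalized user
    }
```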


Cited by 53 publications (23 citation statements). References 5 publications.
“…Beutel et al [7] show how to measure fairness based on pairwise comparisons from randomized experiments, and offer a regularizer to improve fairness when training recommendation models. Leonhardt et al [23] quantify the user unfairness caused by post-processing algorithms whose original goal is to improve diversity in recommendations. Ge et al [16] explore long-term fairness in recommendation and address the problem through dynamic fairness learning.…”
Section: Fair Recommendation
confidence: 99%
“…Burke [14] and Abdollahpouri and Burke [1] categorized different types of multi-stakeholder platforms and introduced several desired group fairness properties. Leonhardt et al [38] identified the unfairness issue for users caused by post-processing algorithms that improve diversity in recommendations. Mehrotra et al [43] proposed a heuristic strategy to jointly optimize fairness and performance in two-sided marketplace platforms.…”
Section: Fair Recommendation
confidence: 99%
“…In recommender systems, researchers have observed popularity and demographic disparities in current user-centric applications, with different demographic groups obtaining different utility from the recommender systems [8,9,20]. Researchers empirically showed that post-processing techniques that improve recommendation diversity can amplify user unfairness [22]. Researchers also proposed four new metrics for collaborative filtering-based recommendation with a binary sensitive attribute, in order to measure the discrepancy in prediction behavior between disadvantaged and advantaged users.…”
Section: Related Work
confidence: 99%