2021
DOI: 10.1007/s11257-020-09285-1

A flexible framework for evaluating user and item fairness in recommender systems

Abstract: One common characteristic of research works focused on fairness evaluation (in machine learning) is that they call for some form of parity (equality), either in treatment, meaning they ignore information about users' memberships in protected classes during training, or in impact, by enforcing proportional beneficial outcomes for users in different protected classes. In the recommender systems community, fairness has been studied with respect to both users' and items' memberships in protected classes defined …
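The treatment/impact distinction above can be made concrete with a small check on per-group outcomes. The following is a minimal sketch, not the paper's actual framework: the benefit scores, group labels, and the use of a mean-benefit gap are all illustrative assumptions.

```python
import numpy as np

# Hypothetical per-user benefit scores (e.g., NDCG@10 from some recommender)
# and protected-class memberships; both arrays are made-up illustrations.
benefit = np.array([0.42, 0.35, 0.51, 0.28, 0.44, 0.31])
group = np.array(["A", "A", "A", "B", "B", "B"])

# Parity in impact asks for proportional beneficial outcomes across
# protected classes: compare mean benefit per group and report the gap.
means = {g: float(benefit[group == g].mean()) for g in np.unique(group)}
impact_gap = max(means.values()) - min(means.values())
print(means, impact_gap)  # a large gap signals disparate impact
```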

Cited by 50 publications (47 citation statements)
References 70 publications (84 reference statements)
“…As a consequence, there is no such notion of groups. MADR (Mean Average Difference-Ranking) and GCE (General Cross-Entropy) [9] are also measures applicable to situations where a sensitive attribute is defined, and recommendation results are evaluated according to it. These are also not applicable in our context.…”
Section: Fairness in Recommendations
confidence: 99%
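Because GCE recurs throughout the citing literature, a brief sketch may help. In the generalized cross-entropy formulation of [9], unfairness is measured as a divergence between a target "fair" distribution p_f over attribute values and the distribution p_m of recommendation benefit the system actually delivers; GCE is 0 exactly when the two coincide. The β value and the uniform fair target below are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def gce(p_fair, p_model, beta=2.0):
    """Generalized cross-entropy between a target fair distribution p_fair
    and the model's benefit distribution p_model over attribute values,
    following the beta-divergence form described in [9]. Returns 0 when
    p_model == p_fair; values farther from 0 indicate more unfairness.
    Assumes strictly positive probabilities."""
    p_fair = np.asarray(p_fair, dtype=float)
    p_model = np.asarray(p_model, dtype=float)
    s = np.sum(p_fair**beta * p_model**(1.0 - beta))
    return (s - 1.0) / (beta * (1.0 - beta))

# Two protected groups, uniform fair target:
print(gce([0.5, 0.5], [0.5, 0.5]))  # 0.0 -> perfectly fair
print(gce([0.5, 0.5], [0.7, 0.3]))  # non-zero -> benefit is skewed
```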
“…Studies also show empirically that popular recommendation algorithms work better for males since many datasets are male-user-dominated [13]. One way to measure gender and age fairness of different recommendation models is based on generalized cross entropy (GCE) [10,11]; specifically, this work shows that a simple popularity-based algorithm provides better recommendations to male users and younger users, while, conversely, uniform random recommendations and collaborative filtering algorithms provide better recommendations to female users and older users [11]. Lin et al. [29] study how different recommendation algorithms change the preferences for specific item categories (e.g., Action vs.…”
Section: Fairness/Bias in Recommendation Systems
confidence: 99%
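The gender/age comparisons quoted above amount to estimating, per attribute value, the share of recommendation benefit each group receives and scoring it against a fair target. A short usage example, reusing gce() from the sketch above; the per-group quality numbers are hypothetical:

```python
# Hypothetical mean per-group recommendation quality (e.g., NDCG@10).
quality = {"male": 0.31, "female": 0.24}
total = sum(quality.values())
p_m = [q / total for q in quality.values()]  # empirical benefit distribution
print(gce([0.5, 0.5], p_m))  # non-zero -> one group benefits more
```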
“…For this, recent advances from the Natural Language Processing community could be useful, where neural networks can generate realistic pieces of text in several domains [9,20]. Besides content attributes, including sensitive attributes in the set of controlled (or simulated) information to be generated will allow fairness analyses [14] to be explored more comprehensively than is currently done. Finally, an interesting and potentially promising perspective is to shift the focus of the application, as mentioned earlier, from the data to the algorithms, so that the sampling strategy samples algorithms instead of data.…”
Section: Open Questions
confidence: 99%
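One way to read this suggestion in code: when simulating interaction data, sample sensitive attributes alongside preferences so that the synthetic corpus supports fairness analyses at all. The sampler below is entirely hypothetical, with made-up attribute values and effect sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users = 1000

# Entirely hypothetical simulation: draw a sensitive attribute per user,
# then let it weakly shift latent preferences so that downstream fairness
# analyses (e.g., GCE over the attribute) have structure to detect.
gender = rng.choice(["F", "M"], size=n_users)
latent = rng.normal(0.0, 1.0, size=(n_users, 8))     # latent tastes
shift = np.where(gender == "F", 0.2, -0.2)[:, None]  # controlled signal
prefs = latent + shift                               # simulated profiles
```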