2022
DOI: 10.1002/asi.24648
FAIR: Fairness‐aware information retrieval evaluation

Abstract: With the emerging need to create fairness-aware solutions for search and recommendation systems, a daunting challenge exists in evaluating such solutions. While many traditional information retrieval (IR) metrics can capture relevance, diversity, and novelty as utility with respect to users, they are not suitable for inferring whether the presented results are fair from the perspective of responsible information exposure. On the other hand, existing fairness metrics do not account for user u…


Cited by 12 publications (10 citation statements)
References 38 publications
“…Biega et al. [33] introduced measures and methods to deal with position bias in ranking, which leads to unfairness and less attention to low-ranked items. Ruoyuan et al. [34] considered the problem of measuring user utility in ranking algorithms and introduced an integrated metric for evaluating fairness-aware ranking results.…”
Section: Related Work
confidence: 99%
“…CG does not consider the position of items in the recommendation list. DCG fills this gap by considering the position of each item along with its score, and is defined [34] as in Eq. (23):…”
Section: Evaluation Metrics
confidence: 99%
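The quoted statement contrasts Cumulative Gain (CG), a position-blind sum of relevance scores, with Discounted Cumulative Gain (DCG), which down-weights items appearing lower in the ranking. Eq. (23) itself is not reproduced in the statement; as a sketch, the widely used log-discount form of DCG (which may differ in detail from the definition in [34]) can be computed as:

```python
import math

def cg(relevances):
    """Cumulative Gain: a plain sum of relevance scores, ignoring position."""
    return sum(relevances)

def dcg(relevances):
    """Discounted Cumulative Gain: each item's relevance score is divided
    by log2(rank + 1), so items at lower ranks contribute less."""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

# Two rankings of the same items: CG cannot tell them apart,
# while DCG rewards placing the more relevant items earlier.
good_order = [3, 2, 1]
bad_order = [1, 2, 3]
assert cg(good_order) == cg(bad_order)
assert dcg(good_order) > dcg(bad_order)
```

Normalizing DCG by the DCG of the ideal ordering gives nDCG, the form most often reported in evaluation tables.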
“…To address concerns about algorithmic fairness and bias in personalized recommendation systems, recent research has focused on developing fairness-aware models and evaluation metrics (Gao et al., 2022; Lalor et al., 2024; Zhang et al., 2023). These studies aim to mitigate bias and ensure equitable treatment across diverse user groups, promoting diversity and inclusivity in personalized recommendations.…”
Section: Personalized Information Access and User Profile Modeling: R...
confidence: 99%
“…In addition, these works do not consider fairness constraints. Fair Federated Learning for Recommendation: Fair ML methods for RSs have been more extensively explored in centralized settings than in federated settings (Wang et al. 2023; Li, Ge, and Zhang 2021; Gao, Ge, and Shah 2022). The availability of the whole dataset makes applying existing fairness notions straightforward in centralized learning, whereas applying fairness in FL is challenging.…”
Section: Related Work
confidence: 99%