Boise State ScholarWorks
DOI: 10.18122/b2gq53

Balanced Neighborhoods for Fairness-Aware Collaborative Recommendation

Abstract: Recent work on fairness in machine learning has begun to be extended to recommender systems. While there is a tension between the goals of fairness and of personalization, there are contexts in which a global evaluation of outcomes is possible and where equity across such outcomes is a desirable goal. In this paper, we introduce the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified ve…
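The mechanism named in the abstract can be pictured with a small sketch: learn item-aggregation weights as a linear neighborhood model does, and add a regularizer that pushes the total weight placed on protected and unprotected neighbors toward balance. The SLIM-style least-squares objective, the lam_bal coefficient, and the plain gradient-descent solver below are illustrative assumptions, not the authors' formulation, which the excerpt truncates.

```python
# Illustrative sketch only: a linear neighborhood model for one target item with a
# "balance" regularizer on protected vs. unprotected neighbor weights.  The
# least-squares objective, lam_bal, and the gradient-descent solver are
# assumptions made for this sketch, not the paper's exact method.
import numpy as np

def balanced_neighborhood_weights(R, item_idx, protected, l2=0.1, lam_bal=1.0,
                                  lr=0.005, epochs=500):
    """Learn aggregation weights w for column `item_idx` of the rating matrix R.

    Sketched objective:
        ||R[:, item_idx] - R @ w||^2  +  l2 * ||w||^2  +  lam_bal * (p @ w)^2
    where p is +1 on protected neighbors and -1 on unprotected ones, so the last
    term penalizes any gap between the two groups' total weight.
    """
    target = R[:, item_idx]
    p = np.where(protected, 1.0, -1.0)
    p[item_idx] = 0.0                      # the target item is not its own neighbor
    w = np.zeros(R.shape[1])
    for _ in range(epochs):
        residual = R @ w - target
        grad = 2 * R.T @ residual + 2 * l2 * w + 2 * lam_bal * (p @ w) * p
        grad[item_idx] = 0.0               # keep the self-weight pinned at zero
        w -= lr * grad
        w[item_idx] = 0.0
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R = (rng.random((50, 8)) > 0.7).astype(float)          # toy implicit ratings
    protected = np.array([True, True, True, False, False, False, False, False])
    w = balanced_neighborhood_weights(R, item_idx=0, protected=protected)
    print("protected weight mass:  ", w[protected].sum())
    print("unprotected weight mass:", w[~protected].sum())
```

With a large lam_bal, the learned weights spread more evenly across the two neighbor groups, which is the intuition behind using balanced neighborhoods to equalize outcomes without discarding personalization.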

Citations: cited by 98 publications (152 citation statements)
References: 11 publications (9 reference statements)
“…We later randomly sample 100 products and manually decide if these products preserve any gender constraints based on their descriptions. Although 4 out of 100 products exhibit gender implications, we don't find any strict constraints which prevent the unfavorable user identity group from consuming these products.…”
Section: Electronics (mentioning)
confidence: 80%
“…A fairness-aware tensor-based algorithm is proposed to address the absolute statistical parity (i.e., items are expected to be presented at the same rate across groups) [32]. Several fairness metrics and their corresponding algorithms are proposed for both pointwise prediction frameworks [6,30] and pairwise ranking frameworks [3]. Methodologically, these algorithms can be summarized as reweighting schemes where underrepresented samples are upweighted [6,15,26] or schemes where additional fairness terms are added to regularize the model [1,3,30].…”
Section: Related Work (mentioning)
confidence: 99%
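The excerpt above groups existing fairness-aware algorithms into two schemes: reweighting, where underrepresented samples are upweighted, and regularization, where a fairness term is added to the training objective. A minimal sketch of both ideas follows; the group encoding, the squared-error loss, and the parity-style penalty are illustrative assumptions, not the formulations used in the works cited there.

```python
# Hedged sketch of the two schemes named above.  The inverse-frequency weights,
# squared error, and mean-gap parity penalty are illustrative stand-ins, not the
# cited papers' definitions.
import numpy as np

def group_reweights(groups):
    """Scheme 1: weight each sample inversely to the size of its group,
    so underrepresented groups are upweighted."""
    _, inverse, counts = np.unique(groups, return_inverse=True, return_counts=True)
    return len(groups) / (len(counts) * counts[inverse])

def fairness_regularized_loss(pred, target, groups, lam=1.0):
    """Scheme 2: squared error plus a statistical-parity-style penalty on the
    gap between the two groups' mean predictions."""
    err = np.mean((pred - target) ** 2)
    gap = pred[groups == 0].mean() - pred[groups == 1].mean()
    return err + lam * gap ** 2

if __name__ == "__main__":
    groups = np.array([0, 0, 0, 0, 0, 0, 1, 1])            # group 1 underrepresented
    pred = np.array([0.9, 0.8, 0.7, 0.9, 0.8, 0.7, 0.2, 0.3])
    target = np.ones_like(pred)
    print("sample weights:", group_reweights(groups))       # group-1 rows upweighted
    print("regularized loss:", fairness_regularized_loss(pred, target, groups))
```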
“…Similarly, Gillen et al [15] considered the fairness problem in online learning scenarios where the main objective is to minimize a game theoretic notion of regret. Also, fairness is studied in many other machine learning settings, including ranking [7], personalization and recommendation [4,5,25], data summarization [6], targeted advertisement [38], fair PCA [34], empirical risk minimization [9,19], privacy preserving [12] and a welfare-based measure of fairness [20]. Finally, due to the massive size of today's datasets, practical algorithms with fairness criteria should be able to scale.…”
Section: Related Work (mentioning)
confidence: 99%
“…might offer unfair or unequal quality of service to individual (or groups of) users [5,8] or lead to societal polarization by increasing the divergence between preferences of individual (or groups of) users [13].…”
Section: Introduction (mentioning)
confidence: 99%