2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology
DOI: 10.1109/wi-iat.2012.135

Serendipitous Personalized Ranking for Top-N Recommendation

Abstract: Serendipitous recommendation benefits both e-retailers and users. It suggests items that are both unexpected and useful to users. These items are not only profitable to the retailers but also surprisingly suited to consumers' tastes. However, due to the imbalance in observed data for popular and tail items, existing collaborative filtering methods fail to give satisfactory serendipitous recommendations. To solve this problem, we propose a simple and effective method, called serendipitous perso…

Cited by 38 publications (52 citation statements). References 24 publications (28 reference statements).
“…A change of serendipity might affect other properties of a recommender system. To demonstrate the dependence of different properties and features of the baselines, we employed evaluation metrics to measure four properties of recommender systems: (1) accuracy, as it is a common property (Kotkov et al, 2016b), (2) serendipity, as SPR, Zhengs, SOG and SOGBasic are designed to improve this property (Lu et al, ;Zheng et al, 2015), (3) diversity, as this is one of the objectives of TD, SOG and SOGBasic (Ziegler et al, 2005) and (4) novelty, as SPR, Zhengs, SOG and SOGBasic are designed to improve this property (Lu et al, ;Zheng et al, 2015).…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
“…We selected the 100 most popular items for PM following one of the most common strategies (Zheng et al, 2015;Lu et al, ). Items relevant to user u are represented by REL u , REL u = {i ∈ TestSet u |r ui > θ}, where TestSet u is a ground truth for user u, while θ is the threshold rating, which in our experiments equals 3.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
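The relevant-item set quoted above can be sketched in a few lines. This is an illustrative reconstruction of the definition REL_u = {i ∈ TestSet_u | r_ui > θ}, not code from the cited paper; the function name and the toy ratings are assumptions.

```python
THETA = 3  # threshold rating theta used in the quoted experiments

def relevant_items(test_set_u):
    """Return REL_u given test_set_u, a dict mapping item id -> held-out rating r_ui."""
    return {i for i, r in test_set_u.items() if r > THETA}

# Toy ground truth for one user: only ratings strictly above 3 count as relevant.
test_set_u = {"item_a": 5, "item_b": 3, "item_c": 4, "item_d": 2}
rel_u = relevant_items(test_set_u)
print(sorted(rel_u))  # → ['item_a', 'item_c']
```

Note the strict inequality: an item rated exactly at the threshold (item_b) is excluded from REL_u.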
“…In fact, it has been recognized that item popularity distribution in most real-world recommendation datasets has a heavy tail [17,13], following an approximate power-law or exponential distribution. Accordingly, most non-positive items drawn by uniform sampling in Algorithm 1 are unpopular due to the long tail distribution, and thus contribute less to the desired loss function.…”
Section: Static and Context-independent Sampler (mentioning)
confidence: 99%
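The long-tail observation in the statement above can be made concrete with a small simulation. This is a sketch under assumed Zipf-like popularity weights (not data from either paper): the head items carry most of the interaction mass, yet uniform sampling almost never draws them.

```python
import random

random.seed(0)
n_items = 10_000
# Assumed Zipf-like popularity: item k gets weight proportional to 1/(k+1).
weights = [1.0 / (k + 1) for k in range(n_items)]
head = set(range(100))  # the 100 most popular items

# Share of total popularity mass held by the head (over half, by the harmonic sums).
head_mass = sum(weights[k] for k in head) / sum(weights)

# Uniform sampling ignores popularity, so it hits the head ~100/10000 = 1% of the time.
samples = [random.randrange(n_items) for _ in range(100_000)]
head_rate = sum(s in head for s in samples) / len(samples)

print(f"head popularity mass: {head_mass:.2f}, uniform hit rate: {head_rate:.3f}")
```

So under this assumed distribution roughly half the interaction mass sits in 1% of the items, while a uniform negative sampler spends 99% of its draws on the tail, which is the imbalance the quoted statement says makes those samples contribute little to the loss.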