Proceedings of the 2nd International Workshop on Information Heterogeneity and Fusion in Recommender Systems 2011
DOI: 10.1145/2039320.2039325

Hybrid algorithms for recommending new items

Abstract: Although recommender systems based on collaborative filtering typically outperform content-based systems in terms of recommendation quality, they suffer from the new-item problem: they cannot recommend items that have few or no ratings. This problem is particularly acute in TV applications, where the catalog of available items (e.g., TV programs) is very dynamic. In contrast, content-based recommender systems can recommend both old and new items, but the general quality of the recommen…
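To make the hybrid idea concrete, here is a minimal sketch of one basic hybridization scheme, a switching hybrid that falls back to content-based predictions for cold items. This is an illustration only, not the algorithms proposed in the paper; `cf_model`, `content_model`, and their `predict` interface are hypothetical stand-ins.

```python
def hybrid_score(user_id, item_id, cf_model, content_model,
                 ratings_count, min_ratings=5):
    """Switching hybrid (illustrative sketch): use collaborative filtering
    for items with enough ratings, content-based prediction otherwise.

    cf_model / content_model and their predict() interface are assumed
    stand-ins, not APIs from the paper. ratings_count maps item -> #ratings.
    """
    if ratings_count.get(item_id, 0) >= min_ratings:
        # Warm item: collaborative filtering typically recommends better.
        return cf_model.predict(user_id, item_id)
    # New or rarely rated item: only content features are usable.
    return content_model.predict(user_id, item_id)
```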

Cited by 37 publications (35 citation statements) | References 23 publications
“…The four different metrics (precision, recall, fallout, and F1) were used for measuring the prediction accuracy and comparing the performance of the mentioned recommender algorithms. Precision plays a great role in instances where some set of best results is required out of several possible alternatives [36][37][38]. This measurement is the share of top results that are relevant. In this study, the relevant items include academic items visited by users and rated with more than 2 stars.…”
Section: Methods and Measurements
confidence: 99%
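As a reference for the four metrics named in the statement above, a small self-contained sketch of how they are computed for one user's top-N list (an assumed top-N evaluation setup; the citing paper's exact definitions may differ):

```python
def topn_metrics(recommended, relevant, catalog_size):
    """Precision, recall, fallout, and F1 for one user's top-N list.

    recommended:  list of recommended item ids (the top-N list)
    relevant:     set of items the user actually found relevant
    catalog_size: total number of items (needed for fallout's denominator)
    """
    rec = set(recommended)
    hits = len(rec & relevant)                   # true positives
    precision = hits / len(rec) if rec else 0.0  # share of top results that are relevant
    recall = hits / len(relevant) if relevant else 0.0
    non_relevant = catalog_size - len(relevant)
    fallout = (len(rec) - hits) / non_relevant if non_relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, fallout, f1
```

For example, `topn_metrics(['a', 'b', 'c'], {'a', 'd'}, catalog_size=100)` yields precision 1/3, recall 1/2, fallout 2/98, and F1 0.4.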
“…Also, EPG-content data differ significantly from VOD consumption data, since (i) the number of user feedback events is larger by several orders of magnitude than for VOD data (a few thousand per user vs. a couple [3]), (ii) but the data are much noisier, (iii) the number of recommendable items at a given moment is smaller (only programs airing now or soon), and (iv) the lifetime of individual items is shorter. Therefore, one needs to preprocess user log data to extract information useful for user preference modeling.…”
Section: Algorithms
confidence: 95%
“…After preprocessing, the dataset was split into train and test sets; the latter contained 686k events from the last day. To characterize the cold-start phenomenon in the dataset, we measured the new-item and new-event ratios in the test set. We used various heuristics to group items together along various item attributes.…”
Section: B. Offline Experiments
confidence: 99%
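A sketch of how such new-item and new-event ratios can be computed, assuming events are (user, item, timestamp) tuples; the citing paper's exact definitions may differ:

```python
def cold_start_ratios(train_events, test_events):
    """Share of distinct test items unseen in training (new-item ratio)
    and share of test events that touch such items (new-event ratio).
    Events are assumed to be (user, item, timestamp) tuples.
    """
    train_items = {item for _, item, _ in train_events}
    test_items = {item for _, item, _ in test_events}
    new_items = test_items - train_items
    new_item_ratio = len(new_items) / len(test_items)
    new_event_ratio = (sum(1 for _, item, _ in test_events if item in new_items)
                       / len(test_events))
    return new_item_ratio, new_event_ratio
```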
“…The most similar items are then proposed to the user. Similarity between two vectors can be expressed by several metrics or algorithms (Ricci et al., 2011), such as cosine similarity (Berkovsky et al., 2007), Singular Value Decomposition (SVD) (Antonelli and Francini, 2009), Latent Semantic Analysis (LSA) (Antonelli and Francini, 2009), Filtered Feature Augmentation (Cremonesi et al., 2011b), Similarity Injection kNN (Cremonesi et al., 2011b), and so on.…”
Section: Content Filtering Algorithms
confidence: 99%
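Of the similarity measures listed in the statement above, cosine similarity is the simplest to show in code. A minimal sketch over item feature vectors (e.g., TF-IDF vectors built from item metadata; the vectorization and the `catalog` mapping are assumptions for illustration, not taken from the cited works):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two item feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def most_similar(query_vec, catalog, k=5):
    """Return the k items from `catalog` (item id -> feature vector)
    most similar to `query_vec`; these would be proposed to the user."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return ranked[:k]
```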