Proceedings of the 7th ACM Conference on Recommender Systems 2013
DOI: 10.1145/2507157.2508004
Workshop and challenge on news recommender systems

Abstract: Recommending news articles imposes additional requirements on recommender systems, including special consumption patterns, fluctuating item collections, and highly sparse user profiles. This workshop (NRS'13@RecSys) brought together researchers and practitioners around designing and evaluating novel news recommender systems. In addition, we offered a challenge that allowed participants to evaluate their recommendation algorithms against actual user feedback.

Cited by 16 publications (12 citation statements). References 18 publications.
“…The plista data set was released as part of the ACM RecSys'13 Challenge on News Recommender Systems [13] so that researchers could develop novel recommendation algorithms on it. The data set contains all interactions on 13 news portals over a one-month time frame, June 1-30, 2013.…”
Section: Data Set
confidence: 99%
“…Compared to other products, however, recommending news has specific challenges [7,22]: news preferences are subject to trends, users do not want to see multiple articles with similar content, and frequently we have insufficient information to profile the reader.…”
Section: Introduction
confidence: 99%
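The news-specific challenges quoted above (trend-driven preferences, a fluctuating item collection, little reader information) are often addressed with recency-aware baselines rather than long-term user profiles. As an illustration only, not a method from the paper, here is a minimal sketch of a popularity recommender whose click counts decay exponentially with age, so fresh articles naturally displace stale ones; all names and the half-life value are assumptions:

```python
import math

def recency_popularity_scores(click_log, now, half_life_hours=6.0):
    """Score articles by click counts decayed exponentially with age.

    click_log: iterable of (article_id, click_timestamp) pairs
               (timestamps in seconds).
    A click loses half its weight every `half_life_hours`, so scores
    track a fluctuating collection where fresh news dominates.
    """
    decay = math.log(2) / (half_life_hours * 3600.0)
    scores = {}
    for article_id, ts in click_log:
        weight = math.exp(-decay * (now - ts))
        scores[article_id] = scores.get(article_id, 0.0) + weight
    return scores

def recommend(click_log, now, k=3):
    """Return the k highest-scoring article ids."""
    scores = recency_popularity_scores(click_log, now)
    return [a for a, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

With a 6-hour half-life, an article with a few very recent clicks outranks one with many day-old clicks, which matches the short shelf life of news items.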
“…The recommendation task is then to predict the rating that a user provided for an item in the test set. Over the years, various benchmarking campaigns have been organized to promote recommender systems evaluation, e.g., as part of scientific conferences ([2,21,19]) or as Kaggle competitions (e.g., [18]). Apart from providing static datasets and organizing challenges to benchmark recommendation algorithms using these datasets, the research community has been very active in developing software and open source toolkits for the evaluation of static datasets.…”
Section: Benchmarking in Static Environments
confidence: 99%
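The rating-prediction task described in this citation is conventionally scored with root-mean-square error on the held-out test set. As a hedged sketch (the dict-based interface is an assumption, not any specific benchmark's API):

```python
import math

def rmse(predictions, test_ratings):
    """Root-mean-square error between predicted and held-out ratings.

    predictions, test_ratings: dicts mapping (user, item) -> rating.
    Only (user, item) pairs present in the test set are scored.
    """
    errors = [(predictions[pair] - rating) ** 2
              for pair, rating in test_ratings.items()]
    return math.sqrt(sum(errors) / len(errors))
```

For example, predicting 4.0 for a true rating of 5.0 and 3.0 for a true 3.0 yields sqrt((1 + 0) / 2) ≈ 0.707; static benchmarks rank algorithms by this error over the whole test set.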