Proceedings of the 10th ACM Conference on Recommender Systems 2016
DOI: 10.1145/2959100.2959176
Contrasting Offline and Online Results when Evaluating Recommendation Algorithms

Cited by 65 publications (35 citation statements)
References 12 publications
“…The hunt for better accuracy values dominates research activities in this area, even though it is not even clear if slightly higher accuracy values are relevant in terms of adding value for recommendation consumers or providers [20,22,52]. In fact, a number of research works exist that indicate that higher accuracy does not necessarily translate into better-received recommendations [4,9,13,31,37].…”
Section: Progress Assessment
confidence: 99%
“…Next to the offline evaluations made possible by these data sets, it is worth noting that online evaluations still play an extensive role in evaluating a recommender system. Users' actual reactions might differ drastically from predictions made from offline data [78].…”
Section: Future Work
confidence: 98%
“…This number can be variable. For example, in the MovieLens dataset, 500, 1000, 1500, or 2000 items may be selected randomly [28,36].…”
Section: Related Work
confidence: 99%
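The sampling step described in the last statement — drawing a fixed-size random subset of candidate items for offline evaluation — can be sketched as below. This is a minimal illustration, not the cited papers' actual protocol; the catalog IDs and the `sample_items` helper are hypothetical stand-ins.

```python
import random

def sample_items(item_ids, n, seed=42):
    """Draw n distinct items uniformly at random from a candidate set.

    A seeded Random instance keeps the draw reproducible across runs,
    which matters when comparing algorithms on the same sampled subset.
    """
    rng = random.Random(seed)
    return rng.sample(item_ids, n)

# Hypothetical stand-in for a MovieLens item catalog.
catalog = list(range(1, 3001))

# One of the subset sizes mentioned above (500, 1000, 1500, 2000).
subset = sample_items(catalog, 500)
print(len(subset))  # 500
```

Fixing the seed is a common design choice in offline evaluation: it ensures every algorithm under comparison is scored against the same randomly sampled item set.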