2018
DOI: 10.1007/978-3-319-92007-8_34

Reproducibility of Experiments in Recommender Systems Evaluation

Abstract: Recommender systems evaluation is usually based on predictive accuracy metrics, with better scores meaning recommendations of higher quality. However, the comparison of results is becoming increasingly difficult, since there are different recommendation frameworks and different settings in the design and implementation of the experiments. Furthermore, there might be minor differences in algorithm implementation among the different frameworks. In this paper, we compare well known recommendation algorithms, using…
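The abstract refers to predictive accuracy metrics as the usual basis for evaluation. As an illustrative sketch (not code from the paper), the two most common such metrics, RMSE and MAE, can be computed over pairs of predicted and actual ratings like this:

```python
import math

def rmse(predicted, actual):
    """Root Mean Squared Error: penalizes large rating errors more heavily."""
    errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return math.sqrt(sum(errors) / len(errors))

def mae(predicted, actual):
    """Mean Absolute Error: average magnitude of the rating errors."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Toy example: four predicted ratings against the held-out true ratings.
predicted = [4.2, 3.1, 5.0, 2.4]
actual = [4.0, 3.5, 4.5, 2.0]
print(rmse(predicted, actual))
print(mae(predicted, actual))
```

For both metrics, lower is better; RMSE weights large errors more than MAE does, which is one reason frameworks that report different metrics are hard to compare directly.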

Cited by 9 publications (8 citation statements)
References 20 publications (21 reference statements)
“…A major challenge in recommender systems evaluation is that when a new recommendation algorithm is developed, reproducibility is not considered in depth, making the study difficult for other researchers to reproduce in the future. In our previous work we showed that it is difficult to reproduce results across different evaluation libraries due to differences in algorithm and metric implementations, while highlighting that results from well-known algorithms available across libraries can be easily reproduced 1 . To reproduce experiments in recommender systems evaluation, it is recommended to follow a set of guidelines 1,2 .…”
Section: Proposed Approach
confidence: 99%
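The statement above attributes non-reproducibility partly to differences in metric implementations. A minimal sketch (toy data, not from the cited work) of one such real-world divergence: some libraries pool all prediction errors into a single RMSE, while others average a per-user RMSE, and with unevenly sized user test sets the two conventions disagree:

```python
import math

# user -> list of (predicted, actual) rating pairs; u2 has more test ratings
data = {
    "u1": [(4.0, 5.0)],
    "u2": [(3.0, 3.0), (2.0, 4.0), (5.0, 5.0)],
}

def rmse(pairs):
    return math.sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))

# Convention (a): pool every pair, compute one global RMSE.
global_rmse = rmse([pair for pairs in data.values() for pair in pairs])

# Convention (b): compute RMSE per user, then average the user-level scores.
per_user_rmse = sum(rmse(pairs) for pairs in data.values()) / len(data)

print(global_rmse, per_user_rmse)  # the two values differ
```

Both values are legitimately called "RMSE" in different frameworks, which is exactly why comparing scores across libraries without checking implementations is unsafe.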
“…During the last few years, research in recommender systems, both in academia and in industry, has led to numerous publications. The popularity of recommender systems research has made the reproducibility and replication of experiments during the evaluation of such systems an increasingly important problem 1,2 . The evaluation of recommendation algorithms is important for measuring the quality of the results and for making objective comparisons between algorithms.…”
Section: Introduction
confidence: 99%
“…A variety of recommendation systems have been proposed by researchers. The main types of recommendation systems include [2][3][4][5]: (1) content-based recommendation systems, which recommend goods a user is interested in based on their historical behavior;…”
Section: Introduction
confidence: 99%
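The citing work above mentions content-based recommendation as one of the main system types. A hypothetical minimal sketch of the idea (toy feature vectors, names invented for illustration): build a user profile from the features of liked items, then rank unseen items by cosine similarity to that profile:

```python
import math

# item -> feature vector (e.g., genre weights); toy data for illustration
items = {
    "A": [1.0, 0.0, 1.0],
    "B": [1.0, 0.0, 0.9],
    "C": [0.0, 1.0, 0.0],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(liked, items, k=1):
    """Rank items the user has not seen by similarity to their profile."""
    dim = len(next(iter(items.values())))
    # The profile is the mean feature vector of the liked items.
    profile = [sum(items[i][d] for i in liked) / len(liked) for d in range(dim)]
    candidates = [i for i in items if i not in liked]
    return sorted(candidates, key=lambda i: cosine(profile, items[i]), reverse=True)[:k]

print(recommend({"A"}, items))
```

Here a user who liked item "A" is recommended "B", the unseen item whose features most resemble "A"; real systems derive the feature vectors from item content such as text or metadata.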