Recommender Systems Handbook 2010
DOI: 10.1007/978-0-387-85820-3_8
Evaluating Recommendation Systems

Cited by 953 publications
(691 citation statements)
References 46 publications
“…Unlike for information retrieval systems, a consistent notion of relevance has not been established. Shani and Gunawardana [25] distinguish three evaluation methodologies: offline experiments, user studies, and online evaluation. In News-REEL, our focus is on the static environments of offline experimentation and dynamic environments of online evaluation.…”
Section: Evaluation of News Recommendation Systems (mention, confidence: 99%)
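The excerpt above distinguishes offline experiments from user studies and online evaluation. The core of an offline experiment is replaying logged interactions: hold some of them out, train on the rest, and score the recommender against the held-out portion. A minimal sketch of such a hold-out split, assuming interactions are simple (user, item) pairs (the function name and data are illustrative, not from the chapter):

```python
import random

def offline_split(interactions, test_fraction=0.2, seed=42):
    """Split logged user-item interactions into train/test sets.

    Simulates an offline experiment: the recommender is trained on the
    training portion and evaluated on the held-out test portion.
    """
    rng = random.Random(seed)          # fixed seed for reproducible splits
    shuffled = interactions[:]         # copy so the caller's log is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical interaction log: (user, item) pairs
logs = [("u1", "item_a"), ("u1", "item_b"), ("u2", "item_a"),
        ("u2", "item_c"), ("u3", "item_b")]
train, test = offline_split(logs)
```

In contrast, online evaluation serves recommendations to live users and measures their responses, so no such static split exists; the static/dynamic distinction the NewsREEL excerpt draws follows directly from this difference.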
“…Recommender systems can now be found in many modern applications that expose the user to a huge collection of items [246]. Such systems typically provide the user with a list of recommended items they might prefer, or predict how much they might prefer each item.…”
Section: About Recommendation Systems (mention, confidence: 99%)
“…For this purpose, we used Precision, Recall and F1 measures [10]. We can now define recall, precision and F1 for Top n recommender systems in the following way:…”
Section: B. Evaluation Measures (mention, confidence: 99%)
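The measures named in the excerpt above have the standard definitions for top-n recommenders: precision is the fraction of recommended items that are relevant, recall is the fraction of relevant items that were recommended, and F1 is their harmonic mean. A minimal sketch (function name and example data are illustrative, not from the cited paper):

```python
def precision_recall_f1(recommended, relevant):
    """Compute precision, recall, and F1 for a top-n recommendation list.

    recommended: ordered list of the n recommended item ids
    relevant:    set of item ids the user actually found relevant
    """
    hits = len(set(recommended) & set(relevant))   # relevant items among the top n
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)    # harmonic mean of the two
    return precision, recall, f1

# Example: 5 recommendations, 4 relevant items, 2 of them recommended
p, r, f = precision_recall_f1([1, 2, 3, 4, 5], {2, 5, 7, 9})
# p = 2/5 = 0.4, r = 2/4 = 0.5, f1 = 2*0.4*0.5/0.9 ≈ 0.444
```

Note the usual trade-off: increasing n tends to raise recall while lowering precision, which is why the harmonic mean F1 is reported alongside both.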