2015
DOI: 10.1007/978-3-319-24592-8_12

A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems

Cited by 64 publications (49 citation statements) · References 26 publications
“…In one experiment, removing stop words increased recommendation effectiveness by 50 % (CTR = 4.16 % vs. CTR = 6.54 %) (Beel et al. 2013e). In another experiment, effectiveness was almost the same (CTR = 5.94 % vs. CTR = 6.31 %) (Beel and Langer 2015). Similarly, in one experiment, the 'stereotype' recommendation approach was around 60 % more effective than in another experiment (CTR = 3.08 % vs. CTR = 4.99 %) (Beel et al. 2014a, 2015b).…”
Section: Introduction (mentioning)
confidence: 72%
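The statement above compares recommendation approaches by click-through rate (CTR), the fraction of displayed recommendations that users clicked. A minimal sketch of that arithmetic, using hypothetical impression and click counts chosen only to reproduce the quoted CTR values (4.16 % vs. 6.54 %):

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: fraction of displayed recommendations that were clicked."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return clicks / impressions

def relative_lift(ctr_baseline: float, ctr_variant: float) -> float:
    """Relative improvement of the variant over the baseline, as a fraction."""
    return (ctr_variant - ctr_baseline) / ctr_baseline

# Hypothetical counts matching the quoted experiment's CTRs.
baseline = ctr(416, 10_000)   # 0.0416 (with stop words)
variant = ctr(654, 10_000)    # 0.0654 (stop words removed)
print(f"lift: {relative_lift(baseline, variant):.0%}")
```

Note that by this calculation the lift from 4.16 % to 6.54 % is roughly 57 %, which the citing authors round to "50 %".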
“…Herlocker et al. (2004) wrote an article on how to evaluate collaborative-filtering approaches. Various authors showed that offline and online evaluations often provide contradictory results (Cremonesi et al. 2012; McNee et al. 2002), and several more papers about various aspects of recommender-system evaluation have been published (Amatriain et al. 2009; Beel and Langer 2015; Bollen and Rocha 2000; Bogers and van den Bosch 2007; Cremonesi et al. 2011; Domingues Garcia et al. 2012; Ge et al. 2010; Hayes et al. 2002; Hofmann et al. 2014; Jannach et al. 2012; Knijnenburg et al. 2011, 2012; Konstan and Riedl 2012; Manouselis and Verbert 2013; Pu et al. 2011, 2012; Said 2013; Shani and Gunawardana 2011). However, while many of the findings in these papers are important with respect to reproducibility, the authors did not mention or discuss their findings in the context of reproducibility.…”
Section: Introduction (mentioning)
confidence: 99%
“…offline evaluations and user studies. We do not discuss this issue here but refer to a recent publication, in which we showed that online evaluations are preferable over offline evaluations, and that CTR seems to be the most sensible metric for our purpose [14]. In that publication we also explain why showing only the title of the recommended papers is sufficient for our evaluation, instead of showing further information such as author name and publication year.…”
Section: Methods (mentioning)
confidence: 98%
“…In one experiment to assess the effectiveness of a recommendation approach, removing stopwords increased recommendation effectiveness by 50 % [6]. In another experiment, effectiveness was almost the same [5]. Similarly, Lu et al [14] found that sometimes terms from an article's abstract performed better than terms from the article's body, but in other cases they observed the opposite.…”
(mentioning)
confidence: 93%