Proceedings of the 7th ACM Conference on Recommender Systems 2013
DOI: 10.1145/2507157.2507222

An analysis of tag-recommender evaluation procedures

Abstract: Since the rise of collaborative tagging systems on the web, the tag recommendation task (suggesting suitable tags to users of such systems while they add resources to their collections) has been widely tackled. However, the (offline) evaluation of tag recommendation algorithms usually suffers from difficulties such as data sparseness or the cold-start problem for new resources or users. Previous studies therefore often used so-called post-cores (specific subsets of the original datasets) for their experiments. …
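The post-cores (p-cores) mentioned in the abstract are usually computed by iteratively removing users, resources, and tags that occur in fewer than p posts, until all remaining entities meet the threshold. A minimal sketch, under the simplifying assumption that each post is a single (user, resource, tag) triple rather than a (user, resource, tag-set) record:

```python
from collections import Counter

def p_core(posts, p=2):
    """Iteratively prune a folksonomy until every user, resource, and
    tag occurs in at least p triples (a simplified post-core at level p).
    `posts` is a list of (user, resource, tag) triples."""
    while True:
        users = Counter(u for u, r, t in posts)
        resources = Counter(r for u, r, t in posts)
        tags = Counter(t for u, r, t in posts)
        kept = [(u, r, t) for u, r, t in posts
                if users[u] >= p and resources[r] >= p and tags[t] >= p]
        if len(kept) == len(posts):  # fixed point reached: nothing pruned
            return kept
        posts = kept  # pruning may push other entities below p; iterate
```

The loop is necessary because removing one sparse entity can push others below the threshold; the procedure repeats until a fixed point is reached.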

Cited by 20 publications (16 citation statements)
References 13 publications
“…Furthermore, this component supports various data enrichment and transformation methods such as p-core pruning [8], topic modeling [25], training/test set splitting [20] and data conversion into related formats (e.g., for MyMediaLite [9]).…”
Section: Data Processing
confidence: 99%
“…As outlined in Section 1 above, it was crucial for us to benchmark the algorithms in the unfiltered datasets without p-core pruning to avoid a biased evaluation and to simulate a real-world folksonomy setting (see also [2]). This is especially important for the development of live recommender services.…”
Section: Datasets
confidence: 99%
“…p-core pruned datasets, which do not reflect a real-world folksonomy setting, as shown by Doerfel et al. [2]. With that regard, this study aims to provide a transparent and reproducible evaluation of various tag recommender algorithms in real-world folksonomies.…”
Section: Introduction
confidence: 99%
“…To reduce computational effort, we randomly selected 20% of the CiteULike user profiles [16] (the other datasets were processed in full size). We did not use a p-core pruning approach to avoid a biased evaluation (see [12]) but excluded all posts assigned to unique resources, i.e., resources that have only been bookmarked once (see [31]). … datasets as well as our used dataset samples after the exclusion of the posts assigned to unique resources are shown in Table 2.…”
Section: Datasets
confidence: 99%
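The lighter-weight alternative to p-core pruning described above, dropping only the posts of resources bookmarked a single time, can be sketched as a one-pass filter. This is an illustrative sketch, again assuming posts are simplified (user, resource, tags) tuples:

```python
from collections import Counter

def drop_unique_resources(posts):
    """Remove all posts whose resource appears in only one post,
    i.e., resources that have been bookmarked exactly once.
    `posts` is a list of (user, resource, tags) tuples."""
    counts = Counter(r for _, r, _ in posts)  # posts per resource
    return [(u, r, t) for u, r, t in posts if counts[r] >= 2]
```

Unlike p-core pruning, no iteration is needed: removing a singleton resource removes exactly one post, which cannot turn another resource into a singleton.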