Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization 2022
DOI: 10.1145/3503252.3531292
Top-N Recommendation Algorithms: A Quest for the State-of-the-Art

Cited by 21 publications (11 citation statements) · References 26 publications
“…Their paper shows several significant differences in performance between user groups scoring high vs. groups scoring low on several personality traits. Vito Walter Anelli et al [17] establish a common understanding of the state-of-the-art for top-n recommendation tasks. The results of the research show that there is no consistent winner across datasets and metrics for the examined top-n recommendation task.…”
Section: Introduction
confidence: 99%
“…The Beauty dataset contains product reviews and ratings from Amazon. In line with prior research [1], we pre-processed the dataset to include at least five interactions per user and item (p-core = 5). The Delivery Hero dataset contains anonymous QCommerce sessions for dark store and local shop orders.…”
Section: Methods
confidence: 99%
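The p-core = 5 pre-processing mentioned in the excerpt above is the standard iterative filter: repeatedly drop users and items with fewer than p interactions until both constraints hold simultaneously (removing an item can push a user below the threshold, so one pass is not enough). A minimal sketch, with `p_core_filter` and the `(user_id, item_id)` pair representation as illustrative choices, not taken from the cited work:

```python
# Iterative p-core filtering: keep only users and items with at least
# p interactions, repeating until a fixed point is reached.
from collections import Counter

def p_core_filter(interactions, p=5):
    """interactions: iterable of (user_id, item_id) pairs."""
    data = list(interactions)
    while True:
        user_counts = Counter(u for u, _ in data)
        item_counts = Counter(i for _, i in data)
        kept = [(u, i) for u, i in data
                if user_counts[u] >= p and item_counts[i] >= p]
        if len(kept) == len(data):  # nothing removed this pass: done
            return kept
        data = kept  # dropped rows may invalidate others; iterate
```

The loop always terminates because each pass either removes at least one interaction or returns.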
“…Under this light, we can observe a parallel between research in Natural Language Processing (NLP) and sequential recommendation, where novel recommendation models are inspired by NLP models [6]. GRU4Rec [14] adopted the Gated Recurrent Unit (GRU) mechanism from [5], SASRec [21] used the transformer architecture from [37],¹ and BERT4Rec [35] adopted BERT [7]. The influence of NLP research on sequential recommendation models extends naturally to Large Language Models (LLMs).…”
¹ https://github.com/dh-r/LLM-Sequential-Recommendation
Section: Introduction
confidence: 99%
“…The initial step in this work is to address missing entries in the ratings dataset, as these gaps disrupt the user interest patterns. To accomplish this, a user-user trust-centric approach, as proposed by Awati et al in [33] and J. Bobadila et al [34], is used for filling the missing entries. Once the missing entry-filled dataset is pre-processed, it undergoes vectorization for further processing.…”
Section: Methods
confidence: 99%
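The excerpt above only outlines the imputation step, and the trust metric of [33]/[34] is not specified there. As a generic illustration of the idea (filling a missing rating from similar users), the sketch below substitutes plain mean-centered cosine similarity for the trust score; `fill_missing`, `R`, and `k` are hypothetical names, not from the cited papers:

```python
import numpy as np

def fill_missing(R, k=2):
    """Fill NaN entries of ratings matrix R (users x items) with a
    similarity-weighted average over the k most similar users."""
    R = R.astype(float)
    filled = R.copy()
    mask = ~np.isnan(R)
    means = np.nanmean(R, axis=1, keepdims=True)   # per-user mean rating
    Rz = np.where(mask, R - means, 0.0)            # mean-centered, NaN -> 0
    norms = np.linalg.norm(Rz, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    S = (Rz / norms) @ (Rz / norms).T              # user-user cosine similarity
    np.fill_diagonal(S, 0.0)                       # ignore self-similarity
    n_users, n_items = R.shape
    for u in range(n_users):
        nbrs = np.argsort(-S[u])[:k]               # k most similar users
        for i in range(n_items):
            if np.isnan(R[u, i]):
                obs = [v for v in nbrs if mask[v, i]]
                w = S[u, obs] if obs else np.array([])
                if obs and w.sum() > 0:
                    filled[u, i] = (w @ R[obs, i]) / w.sum()
                else:
                    filled[u, i] = means[u, 0]     # fall back to user mean
    return filled
```

Observed ratings are left untouched; only NaN entries are replaced, which matches the role this step plays before vectorization in the quoted pipeline.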