Proceedings of the 24th International Conference on World Wide Web 2015
DOI: 10.1145/2740908.2742726

AutoRec: Autoencoders Meet Collaborative Filtering

Abstract: This paper proposes AutoRec, a novel autoencoder framework for collaborative filtering (CF). Empirically, AutoRec's compact and efficiently trainable model outperforms state-of-the-art CF techniques (biased matrix factorization, RBM-CF and LLORMA) on the MovieLens and Netflix datasets.
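
The abstract frames AutoRec as an autoencoder applied to collaborative filtering: a partially observed rating vector is encoded into a low-dimensional hidden representation and decoded back into a reconstruction, with only the observed ratings contributing to the training loss. The sketch below illustrates that idea for an item-based variant in PyTorch; the hidden dimension, optimizer settings, masked-MSE loss, and toy data are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of an item-based autoencoder for collaborative filtering,
# in the spirit of AutoRec. Layer sizes, the masked-MSE loss, and the toy
# data below are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ItemAutoencoder(nn.Module):
    def __init__(self, num_users: int, hidden_dim: int = 500):
        super().__init__()
        # Encode a per-item rating vector (one entry per user), then decode it back.
        self.encoder = nn.Linear(num_users, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, num_users)

    def forward(self, ratings: torch.Tensor) -> torch.Tensor:
        # ratings: (batch, num_users), unobserved entries filled with 0.
        hidden = torch.sigmoid(self.encoder(ratings))
        return self.decoder(hidden)  # identity activation on the output layer

def masked_mse(pred: torch.Tensor, target: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # Only observed ratings (mask == 1) contribute to the reconstruction loss.
    diff = (pred - target) * mask
    return (diff ** 2).sum() / mask.sum().clamp(min=1)

# Usage sketch: one gradient step on a batch of item rating vectors (toy data).
num_users, batch_size = 1000, 32
model = ItemAutoencoder(num_users)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

ratings = torch.randint(0, 6, (batch_size, num_users)).float()  # 0 = unobserved
mask = (ratings > 0).float()
loss = masked_mse(model(ratings), ratings, mask)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Prediction for a (user, item) pair then amounts to reading the corresponding entry of the reconstructed rating vector; the weight decay here stands in for the L2 regularization commonly used in autoencoder-based CF models.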

Cited by 896 publications (108 citation statements); references 4 publications.
“…AutoRec [99] reconstructs partial user profiles (i.e., item recommendation) based on the reconstruction power of auto-encoders.…”
Section: CF Models Since Early Years
confidence: 99%
“…Relevant to our work as well, considerable progress has been made using autoencoder models in application to recommender systems. The paper [23] was among the first works incorporating standard autoencoder frameworks to collaborative filtering. More recent architectures such as MultVAE [15] and RecVAE [24] stepped further by moving towards variational autoencoders and introducing specific loss functions tailored to the task of collaborative filtering.…”
Section: Related Work
confidence: 99%
“…Therefore, we propose encoding the rich contexts first and then combining various encoded contexts as the input of the SCR to speed up behavior prediction. Based on this idea, the linear auto-encoder (LAE) [40] can be improved to achieve our goal.…”
Section: Deep Auto-Encoder
confidence: 99%