2018
DOI: 10.48550/arxiv.1802.05814
Preprint

Variational Autoencoders for Collaborative Filtering

Cited by 12 publications (23 citation statements)
References 23 publications
“…This model may be used for artificial data generation, data representation, and inference tasks. A couple of examples are collaborative filtering (Liang et al 2018) and image analysis (Wang et al 2017).…”
Section: Autoencoders (mentioning)
confidence: 99%
“…Weighted Approximate-Rank Pairwise Loss using Matrix Factorization (WARP-MF) [35], which maximizes the rank of positive examples by sampling negative examples until a rank violation occurs. And finally, a Variational Autoencoder with multinomial likelihood (Multi-VAE) [16], a deep learning model that extends variational autoencoders. Each base recommendation algorithm (WARP-MF, BPR-MF, and Multi-VAE) was implemented to identify a set of N candidate recommendations.…”
Section: Preference Calculation: User Rating Training (mentioning)
confidence: 99%
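For context on the Multi-VAE model named in the excerpt above, the following is a minimal sketch of a variational autoencoder with a multinomial likelihood over items, assuming PyTorch. Layer widths, the tanh activations, and the beta weight are illustrative choices, not the cited authors' exact configuration.

```python
# Hypothetical minimal Multi-VAE-style model: a VAE over a user's item-interaction
# vector with a multinomial likelihood. Assumes PyTorch; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiVAE(nn.Module):
    def __init__(self, n_items: int, hidden: int = 600, latent: int = 200):
        super().__init__()
        self.encoder = nn.Linear(n_items, hidden)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.Tanh(), nn.Linear(hidden, n_items)
        )

    def forward(self, x):
        # x: (batch, n_items) rows of the binary interaction matrix.
        h = torch.tanh(self.encoder(F.normalize(x, p=2, dim=1)))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.decoder(z), mu, logvar


def multi_vae_loss(logits, x, mu, logvar, beta: float = 0.2):
    # Multinomial log-likelihood: sum of log-softmax scores at observed items.
    neg_ll = -(F.log_softmax(logits, dim=1) * x).sum(dim=1).mean()
    # KL divergence between q(z|x) and the standard normal prior, scaled by beta.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return neg_ll + beta * kl
```

In a candidate-generation setting like the one the excerpt describes, the decoder logits for items the user has not yet interacted with would be sorted to produce the N candidate recommendations.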
“…In settings where we are not primarily interested in inducing disentangled representations, the β-VAE objective has also been used with β < 1 in order to improve the quality of reconstructions [Alemi et al, 2016, Engel et al, 2017, Liang et al, 2018]. While this also decreases the relative weight of [...] the restriction of the mutual information to a subset of "Anchor" variables z_a.…”
Section: Related Work (mentioning)
confidence: 99%
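As background for this excerpt, one common way to write the β-VAE objective (a sketch of the standard form, not quoted from the cited works) is the evidence lower bound with a scaled KL term:

```latex
% beta-VAE objective: ELBO with a scaled KL regularizer.
% beta < 1 down-weights the KL term to favor reconstruction quality;
% beta > 1 up-weights it, which is the disentanglement regime.
\mathcal{L}_{\beta}(x; \theta, \phi) =
  \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right]
  - \beta \, D_{\mathrm{KL}}\!\left(q_{\phi}(z \mid x) \,\|\, p(z)\right)
```

Setting β = 1 recovers the usual VAE objective; the excerpt refers to works, including Liang et al (2018), that use β < 1.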