Fourteenth ACM Conference on Recommender Systems 2020
DOI: 10.1145/3383313.3412488

Neural Collaborative Filtering vs. Matrix Factorization Revisited

Abstract: Embedding based models have been the state of the art in collaborative filtering for over a decade. Traditionally, the dot product or higher-order equivalents have been used to combine two or more embeddings, e.g., most notably in matrix factorization. In recent years, it was suggested to replace the dot product with a learned similarity, e.g., using a multilayer perceptron (MLP). This approach is often referred to as neural collaborative filtering (NCF). In this work, we revisit the experiments of the NCF paper…
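To make the contrast in the abstract concrete, here is a minimal sketch, not the paper's code: the weights are random and the dimensions are illustrative choices of our own. mf_score combines a user and an item embedding with the classical dot product, while mlp_score replaces it with an NCF-style learned similarity, an MLP applied to the concatenated embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 100, 50, 16

# Embedding tables (random here; in practice learned from interaction data).
P = rng.normal(scale=0.1, size=(n_users, d))   # user embeddings
Q = rng.normal(scale=0.1, size=(n_items, d))   # item embeddings

def mf_score(u, i):
    """Matrix factorization: combine the two embeddings with a dot product."""
    return P[u] @ Q[i]

# NCF-style learned similarity: an MLP over the concatenated embeddings.
# The weights are random purely to show the shape of the computation.
W1 = rng.normal(scale=0.1, size=(2 * d, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1))
b2 = np.zeros(1)

def mlp_score(u, i):
    """NCF-style scoring: similarity computed by a one-hidden-layer MLP."""
    x = np.concatenate([P[u], Q[i]])
    h = np.maximum(x @ W1 + b1, 0.0)           # ReLU hidden layer
    return (h @ W2 + b2).item()

print(mf_score(3, 7), mlp_score(3, 7))
```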

Cited by 290 publications (109 citation statements)
References 24 publications
“…The dot product is widely used to calculate similarity. Unless the dataset is large or the embedding dimension is very small, the inner product is a suitable choice as a measure for embeddings [4].…”
Section: Prediction Layer
confidence: 99%
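As a concrete illustration of why the inner product is convenient as a prediction layer, the hedged sketch below (with random, hypothetical embeddings standing in for learned ones) scores every item for a single user with one matrix-vector product and retrieves the top ranked items:

```python
import numpy as np

# Hypothetical example: random embeddings stand in for learned ones.
rng = np.random.default_rng(1)
n_items, d = 1000, 32
item_emb = rng.normal(size=(n_items, d))  # one row per item
user_emb = rng.normal(size=d)             # embedding of a single user

# A single matrix-vector product scores every item at once, which is
# what makes the inner product so attractive at serving time.
scores = item_emb @ user_emb
top_10 = np.argsort(-scores)[:10]         # indices of the 10 best-scoring items
print(top_10)
```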
“…Recommender systems play a pivotal role in alleviating information overload [1] and are widely used in applications such as Web search, e-commerce, and entertainment. Current recommendation systems mainly rely on collaborative filtering (CF) [2]–[4] to provide useful results. CF is a technique based on a user's historical data (e.g., browsing history, comments) and has attracted much attention for recommendation systems due to its benefits.…”
Section: Introduction
confidence: 99%
“…R(f) is the set of rated scores belonging to feature f, r(v, j) is user v's rated score for item j, ω(u, f) is the weight of feature f for user u, and ω(i, f) is the weight of feature f for item i. The feature weight ω ranges from −1 to 1, and the value computed through Equation (26) is used in Equations (28) and (29). In GF, ω is optimized to measure how close feature f is to the popularity bias.…”
Section: Vanilla Model
confidence: 99%
“…The λ used in Equations (26) and (27) is the learning rate, and |GF(s, u, i)| and |PF(s, u, i)| are the absolute values of the scores. The value computed with Equation (26) learns the ω used in GF through Equations (28) and (29), and the value computed with Equation (27) learns the ω used in PF through Equation (29). Because PF is computed using only IF(i), only Equation (29) is used to learn its ω.…”
Section: Vanilla Model
confidence: 99%
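Equations (26)–(29) themselves are not reproduced in this excerpt, so the exact update rule cannot be reconstructed. The sketch below is only a generic placeholder assembling the stated ingredients: a learning rate λ scaling an error signal, and the feature weight ω kept within its stated range of −1 to 1.

```python
import numpy as np

# Generic placeholder, NOT the cited paper's Equations (26)-(29), which
# this excerpt does not reproduce: nudge the feature weight omega by a
# lambda-scaled error signal and clip it to the stated range [-1, 1].
def update_weight(omega: float, error_signal: float, lam: float = 0.01) -> float:
    omega = omega + lam * error_signal
    return float(np.clip(omega, -1.0, 1.0))

omega = 0.3                                    # current weight of some feature
omega = update_weight(omega, error_signal=0.5)
print(omega)
```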