Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval 2019
DOI: 10.1145/3331184.3331267

Neural Graph Collaborative Filtering

Abstract: Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that the collaborative signal, which is latent in user-item interactions, is not en…
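
The embedding paradigm the abstract refers to can be made concrete with a minimal sketch (not the paper's code; the class name, dimensions, and initialization are illustrative): user and item IDs are mapped to learnable vectors and an interaction is scored by their inner product, so the interaction graph plays no role in how the embeddings themselves are formed.

```python
# Minimal sketch of an ID-embedding recommender (illustrative, not the paper's code).
import torch
import torch.nn as nn

class IDEmbeddingRecommender(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)  # r_u, looked up by user ID
        self.item_emb = nn.Embedding(n_items, dim)  # r_i, looked up by item ID
        nn.init.xavier_uniform_(self.user_emb.weight)
        nn.init.xavier_uniform_(self.item_emb.weight)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # y_hat_ui = <r_u, r_i>: the embeddings are built from IDs alone, so the
        # collaborative signal in the interaction graph is not encoded in them.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)
```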

Cited by 2,337 publications (1,708 citation statements)
References 30 publications
“…Gowalla: This is the check-in dataset [29] obtained from Gowalla, in which users share their locations by checking in. To ensure the quality of the dataset, we retain users and items with at least ten interactions, similar to [53].…”
Section: Datasets (mentioning, confidence: 99%)
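
The excerpt above describes the usual density filter applied to Gowalla; a minimal sketch is given below (assumptions: interactions are (user, item) pairs and the filter is iterated to a fixed point, i.e. a 10-core; the cited papers may instead apply a single pass):

```python
# Minimal sketch of k-core filtering: keep users and items with >= k interactions.
from collections import Counter

def k_core_filter(interactions, k=10):
    """interactions: iterable of (user_id, item_id) pairs."""
    interactions = list(interactions)
    while True:
        user_cnt = Counter(u for u, _ in interactions)
        item_cnt = Counter(i for _, i in interactions)
        kept = [(u, i) for u, i in interactions
                if user_cnt[u] >= k and item_cnt[i] >= k]
        if len(kept) == len(interactions):  # no pair was dropped: fixed point
            return kept
        interactions = kept  # dropping pairs may push other nodes below k, so repeat
```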
“…where ŷ_ui is the prediction score for a (u, i) interaction; f_R(·) is abstracted as the interaction function with recommender parameters Θ_R; r_u ∈ ℝ^d and r_i ∈ ℝ^d are the ID embeddings of user u and item i, respectively; and d is the embedding size. Following prior studies [12,23,34], we use the pairwise BPR loss [23] as the objective function to optimize and learn the parameters Θ_R. Specifically, it assumes that, for a target user, her historical items, which reflect more of her personal interest, should be assigned higher prediction scores than unobserved items, as:…”
Section: Recommender (mentioning, confidence: 99%)
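
The excerpt stops just before the BPR objective itself; below is a minimal sketch under assumed tensor shapes (the function name and the optional L2 term are illustrative, not taken from the cited paper):

```python
# Minimal sketch of the pairwise BPR loss: an observed (positive) item should
# receive a higher prediction score than a sampled unobserved (negative) item.
import torch
import torch.nn.functional as F

def bpr_loss(y_pos: torch.Tensor, y_neg: torch.Tensor, params=None, reg=1e-5):
    """y_pos, y_neg: prediction scores y_hat for (u, i+) and (u, i-) pairs."""
    loss = -F.logsigmoid(y_pos - y_neg).mean()      # -ln sigma(y_ui+ - y_uj-)
    if params is not None:                          # optional L2 regularization
        loss = loss + reg * sum(p.pow(2).sum() for p in params)
    return loss
```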
“…3.3.1 Graph Learning Module. Inspired by recent graph neural networks (GNNs) [10,17,26,34], which are powerful at generating representations for graph data, we employ GraphSage [10] on G and the user-item bipartite graph O+ to embed user, item, and KG entity nodes. In particular, at the l-th graph convolutional layer, a node e receives the information propagated from its neighbors to update its representation, as:…”
Section: Knowledge Graph Policy Network (mentioning, confidence: 99%)
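
The propagation rule the excerpt refers to is not shown; here is a minimal sketch of one GraphSAGE-style layer under simplifying assumptions (dense row-normalized adjacency, mean aggregation, no neighbor sampling), which abstracts away details of the cited paper's exact formulation:

```python
# Minimal sketch of a GraphSAGE-style layer: each node updates its representation
# from the mean of its neighbors' representations concatenated with its own.
import torch
import torch.nn as nn

class SageLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (N, d) embeddings of user, item, and KG entity nodes;
        # adj: (N, N) row-normalized adjacency, so adj @ h is a neighbor mean.
        neigh = adj @ h
        return torch.relu(self.lin(torch.cat([h, neigh], dim=-1)))
```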
“…SpectralCF [10] designs a spectral convolutional filter to model the CF signals in user-item bipartite graphs. NGCF [11] explicitly models the high-order CF signal in the user-item bipartite graph. It builds a multi-layer graph convolutional network by constructing and then aggregating messages over the graph.…”
Section: Related Work (mentioning, confidence: 99%)
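
As a rough illustration of the construct-then-aggregate pattern attributed to NGCF, here is a simplified sketch (dense normalized adjacency with self-loops, no message dropout; it is not the published model verbatim):

```python
# Simplified sketch of an NGCF-style propagation layer: messages combine the
# propagated neighbor embeddings with an element-wise interaction term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NGCFLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)  # transform of propagated embeddings
        self.w2 = nn.Linear(dim, dim)  # transform of the interaction term

    def forward(self, e: torch.Tensor, norm_adj: torch.Tensor) -> torch.Tensor:
        # e: (N, d) stacked user and item embeddings from the previous layer;
        # norm_adj: (N, N) normalized bipartite adjacency with self-loops.
        side = norm_adj @ e                            # aggregate neighbor embeddings
        messages = self.w1(side) + self.w2(side * e)   # plus element-wise interaction
        return F.leaky_relu(messages)
```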
“…The basket recommendation problem can thus be defined as predicting the links between basket nodes and item nodes. On top of this graph, we propose a new framework named BasConv to tackle this problem with a graph convolutional neural network (GCNN) [13,11,14]. Different from prior work, we are able to address the following two aspects:…”
Section: Introduction (mentioning, confidence: 99%)
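
Once a GCNN has produced embeddings for basket and item nodes, the link-prediction view in the excerpt reduces to scoring candidate basket-item edges; a minimal sketch (illustrative inner-product scoring, not BasConv itself):

```python
# Minimal sketch: rank items for a basket by scoring candidate basket-item links.
import torch

def score_basket_items(basket_emb: torch.Tensor, item_embs: torch.Tensor) -> torch.Tensor:
    # basket_emb: (d,) embedding of one basket node; item_embs: (M, d) item nodes.
    # A higher score corresponds to a more likely basket-item link.
    return item_embs @ basket_emb

def recommend(basket_emb: torch.Tensor, item_embs: torch.Tensor, top_k: int = 10):
    scores = score_basket_items(basket_emb, item_embs)
    return torch.topk(scores, k=top_k).indices  # indices of the top-k candidate items
```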