2020
DOI: 10.3390/info11080388

Knowledge-Enhanced Graph Neural Networks for Sequential Recommendation

Abstract: With the rapid growth of big data and internet technology, sequential recommendation has become an important method to help people find items they are potentially interested in. Traditional recommendation methods use only recurrent neural networks (RNNs) to process sequential data. Although effective, the results may be unable to adequately capture both semantic-based preferences and the complex transitions between items. In this paper, we model separated session sequences into session g…
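The abstract's core move is building a graph per session and then applying a GNN to it. As a minimal, hypothetical sketch (illustrative names only, not the authors' code), a session sequence can be converted into a directed session graph whose edges record consecutive-item transitions:

```python
# Minimal sketch (assumption, not the paper's implementation): turn an
# item session like [v1, v2, v3] into nodes plus weighted directed edges
# (v_i -> v_{i+1}); a GNN would then operate on this graph structure.
from collections import defaultdict

def build_session_graph(session):
    """Return unique item nodes and directed transition edges with counts."""
    nodes = sorted(set(session))
    edges = defaultdict(int)
    for src, dst in zip(session, session[1:]):
        edges[(src, dst)] += 1  # repeated transitions accumulate weight
    return nodes, dict(edges)

nodes, edges = build_session_graph([42, 7, 19, 7, 42])
print(nodes)  # [7, 19, 42]
print(edges)  # {(42, 7): 1, (7, 19): 1, (19, 7): 1, (7, 42): 1}
```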

Cited by 22 publications (13 citation statements)
References 25 publications
“…This includes a variety of attribute-representation aggregation methods [2,13,19,20] (e.g., average, sum, element-wise multiplication), concatenation [3,13,31], parallel processing [9,16], use of an attention-based Transformer layer [30], and graph-based representations [27]. Furthermore, attributes have been utilized to introduce additional mask-based pre-training tasks [32] (e.g., predicting an item's attribute).…”
Section: Related Work
confidence: 99%
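For concreteness, here is a hedged sketch of the fusion operators this statement enumerates (average, sum, element-wise multiplication, concatenation); the tensor names and shapes are assumptions for illustration, not any cited paper's API:

```python
# Illustrative attribute-fusion operators over embeddings (assumed shapes).
import torch

item_emb = torch.randn(64)      # item embedding, dimension d = 64
attr_embs = torch.randn(3, 64)  # e.g. category, brand, price-bucket embeddings

avg_fused = item_emb + attr_embs.mean(dim=0)    # average of attributes
sum_fused = item_emb + attr_embs.sum(dim=0)     # sum of attributes
prod_fused = item_emb * attr_embs.prod(dim=0)   # element-wise multiplication
concat_fused = torch.cat([item_emb, attr_embs.flatten()])  # concatenation
print(concat_fused.shape)  # torch.Size([256]); usually projected back to d
```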
“…Model | Graph | GNN | Sequential Modeling
SURGE [14] | item-item graph | GAT | RNN
[145] | item-item graph | GCN | Attention
ISSR [91] | item-item and user-item graph | GCN | RNN
MA-GNN [100] | item-item graph | GCN | Memory network
DGSR [199] | user-item graph | GAT | RNN
GES-SASRec [215] | item-item graph | GCN | RNN
RetaGNN [59] | temporal heterogeneous graph | GAT | Self Attention
TGSRec [36] | temporal user-item graph | GAT | GAT
SGRec [85] | item-item graph | GAT | GAT
GME [179] | item-item and user-item graph | GAT | GAT
STP-UDGAT [90] | item-item and user-item graph | GAT | GAT
GPR [12] | item-item and user-item graph | GCN | GCN

…learning long-term user interests through other networks. The learned multiple representations are fused together and used for final recommendation.…”
Section: Model
confidence: 99%
“…Since GNNs are capable of high-order relationship modeling by aggregating information from neighboring nodes, fusing multiple sequences into one graph lets them learn representations of both users and items across different sequences, which cannot be accomplished by Markov models or recurrent neural networks. Wang et al. [145] propose a simple method that directly converts the sequence information into directed edges on the graph and then uses a GNN to learn representations. Liu et al. [91] construct a user-item bipartite graph and an item-item graph at the same time, where the edges of the item-item graph indicate co-occurrence in a sequence, with edge weights assigned according to the number of occurrences.…”
Section: Model
confidence: 99%
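A minimal sketch of the graph construction described above, assuming directed edges between consecutive items and co-occurrence counts as edge weights (illustrative code, not the cited implementations):

```python
# Fuse many user sequences into one item-item graph whose edge weight
# counts how often two items appear consecutively in any sequence.
from collections import defaultdict

def build_item_graph(sequences):
    weight = defaultdict(int)
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            weight[(src, dst)] += 1  # directed edge, occurrence count
    return dict(weight)

graph = build_item_graph([[1, 2, 3], [2, 3, 4], [1, 2, 4]])
print(graph[(2, 3)])  # 2 -> the (2, 3) transition appears in two sequences
```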
“…Recently, some studies [12,24,27] have conducted sequential recommendation based on KGs and user-item interactions. For instance, Baocheng et al. [24] use a hybrid of a graph neural network and a key-value memory network to extract users' sequential interest and semantic-based preference, which improves the strategy for constructing session graphs from interaction sequences for the sequential recommendation task. To solve the user-commodity sparseness in , a knowledge-guided reinforcement learning model is proposed, which designs a composite reward function to compute both sequence-level and knowledge-level rewards.…”
Section: Sequential Explainable Recommendation
confidence: 99%
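A hedged sketch of the hybrid described above: a GNN-derived session vector serves as the query for an attention-based read over a key-value memory (e.g., with knowledge-graph-derived keys and values), and the two signals are concatenated for scoring. All module names and shapes here are assumptions for illustration, not the cited model:

```python
# Illustrative key-value memory read combined with a GNN session vector.
import torch
import torch.nn.functional as F

def memory_read(query, keys, values):
    """Attention read: softmax(keys . query)-weighted sum of values."""
    scores = F.softmax(keys @ query, dim=0)  # (num_slots,)
    return scores @ values                   # (d,)

d, slots = 64, 100
session_vec = torch.randn(d)  # sequential interest (assumed GNN output)
keys, values = torch.randn(slots, d), torch.randn(slots, d)  # KG-derived memory

semantic_vec = memory_read(session_vec, keys, values)  # semantic-based preference
user_repr = torch.cat([session_vec, semantic_vec])     # fused representation
print(user_repr.shape)  # torch.Size([128])
```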