2022
DOI: 10.48550/arxiv.2204.08807
Preprint

Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System

Ding Zou,
Wei Wei,
Xian-Ling Mao
et al.

Abstract: Knowledge graphs (KGs) play an increasingly important role in recommender systems. Recently, graph neural network (GNN) based models have gradually become the mainstream of knowledge-aware recommendation (KGR). However, GNN-based KGR models suffer from an inherent deficiency, the sparse supervised signal problem, which can degrade their real-world performance. Inspired by the recent success of contrastive learning in mining supervised signals from the data itself, in this paper we focus on exploring…
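The abstract's central idea, using contrastive learning to mine additional supervised signals from the data itself, is most commonly instantiated as an InfoNCE objective between two views of the same nodes. The following is a minimal sketch of that generic objective, not the authors' implementation; the function name and temperature value are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """Generic InfoNCE contrastive loss between two views of the same nodes.

    view_a, view_b: [N, dim] embeddings of the same N nodes under two
    different views. Row i of view_a and row i of view_b form the positive
    pair; every other row serves as a negative.
    """
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / tau                            # [N, N] scaled cosine similarities
    labels = torch.arange(a.size(0), device=a.device)   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```

In this family of methods, such a self-supervised term is typically added to the main recommendation loss with a weighting coefficient, so the contrastive signal supplies extra gradient where user-item interactions are sparse.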


Cited by 4 publications (3 citation statements), published 2022–2023. References 35 publications.
“…To address the dispersion of sequence recommendations, the correlation between items, and the skewness of the length distribution, two information-enhancement operators were designed to extract high-quality graphs [21]. To address the data-sparsity problem on knowledge graphs, model performance was effectively improved by treating multi-level graphs as entity and relation graphs [22]. Although the advantages of self-supervised learning are substantial, it is rarely applied to the multiple fine-grained interactions of users.…”
Section: Self-supervised Representation Learning
Citation type: mentioning
confidence: 99%
“…Recently, contrastive learning has seen a renewed surge of interest [36][37][38] in zero-shot learning, covering tasks at both the node and graph levels. Several works [39,40] have noted the ability of contrastive learning to mine supervised signals from the data itself, which can be used to address the sparse-supervised-signal problem in zero-shot learning. Recent works have successfully applied contrastive learning to zero-shot learning tasks [41][42][43][44].…”
Section: Contrastive Learning for ZSL
Citation type: mentioning
confidence: 99%
“…σ is the ELU non-linear function. After that, we need to define positive and negative samples for learning; inspired by other applications of contrastive learning [25,26], we define positive and negative samples as shown in Figure 3. For a target node, the same node in the other view is treated as the positive sample, the other nodes of the same view are treated as intra-view negative samples, and the nodes of the other view, except for the positive sample, are treated as inter-view negative samples.…”
Section: Graph Cross-view Contrastive Optimization
Citation type: mentioning
confidence: 99%
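Taking the sampling scheme quoted above at face value (positive: the same node in the other view; negatives: all other nodes in both the same view and the other view), the corresponding contrastive loss could be sketched as follows. This is an illustrative reconstruction under those assumptions, not the citing paper's code; the function name and temperature are hypothetical:

```python
import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z_a: torch.Tensor,
                                z_b: torch.Tensor,
                                tau: float = 0.2) -> torch.Tensor:
    """Cross-view contrastive loss with intra- and inter-view negatives.

    z_a, z_b: [N, dim] embeddings of the same N nodes under two views.
    For anchor node i in view A:
      positive            -> node i in view B
      intra-view negatives -> nodes j != i in view A
      inter-view negatives -> nodes j != i in view B
    """
    a = F.normalize(z_a, dim=1)
    b = F.normalize(z_b, dim=1)
    n = a.size(0)
    inter = torch.exp(a @ b.t() / tau)   # [N, N] similarities across views
    intra = torch.exp(a @ a.t() / tau)   # [N, N] similarities within view A
    pos = inter.diag()                   # positive pairs (same node, other view)
    eye = torch.eye(n, device=a.device, dtype=torch.bool)
    neg = inter.masked_fill(eye, 0).sum(1) + intra.masked_fill(eye, 0).sum(1)
    return -torch.log(pos / (pos + neg)).mean()
```

In practice, losses of this form are often symmetrized by also treating view B as the anchor and averaging the two directions.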