2019
DOI: 10.48550/arxiv.1908.05611
Preprint
GraphSW: a training protocol based on stage-wise training for GNN-based Recommender Model

Abstract: Recently, researchers have utilized Knowledge Graphs (KGs) as side information in recommender systems to address cold-start and sparsity issues and to improve recommendation performance. Existing KG-aware recommendation models use the features of neighboring entities and structural information to update the embedding of the currently located entity. Although this rich information is beneficial to the downstream task, the cost of exploring the entire graph is massive and impractical. In order to reduce the computational co…

Cited by 2 publications (2 citation statements) · References 16 publications
“…Unlike previous studies, which have primarily focused on exploring novel neural networks, some researchers consider reducing the massive computational cost while maintaining the pattern of feature extraction. The GraphSW [66] technique is based on a stage-wise training framework that examines only a subset of KG entities at each stage. In the succeeding stages, the network receives the embeddings learned in the previous stages, so the model can gradually learn the information from the KG.…”
Section: Figure 6, The Models Focusing On Capturing High-order Context… · Classification: mentioning · Confidence: 99%
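The stage-wise protocol that statement describes can be sketched as follows. This is a minimal toy illustration under stated assumptions, not the GraphSW implementation: the entity count, the growing sample sizes, and the mean aggregation are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy KG: 10 entities, each with a fixed list of neighboring entity ids.
num_entities, dim = 10, 4
neighbors = {e: rng.choice(num_entities, size=6, replace=False).tolist()
             for e in range(num_entities)}

def run_stage(embeddings, sample_size):
    """One stage: aggregate features from a SAMPLED subset of neighbors
    instead of the full neighborhood (the cost-reduction idea)."""
    updated = embeddings.copy()
    for e in range(num_entities):
        subset = rng.choice(neighbors[e], size=sample_size, replace=False)
        # Mix entity e's own embedding with the mean of its sampled neighbors.
        updated[e] = 0.5 * embeddings[e] + 0.5 * embeddings[subset].mean(axis=0)
    return updated

# Stage-wise protocol: each stage receives the embeddings learned by the
# previous stage, so information from the KG accumulates gradually.
emb = rng.normal(size=(num_entities, dim))
for sample_size in [2, 3, 4]:          # examine a growing subset per stage
    emb = run_stage(emb, sample_size)
print(emb.shape)  # (10, 4)
```

In a real model each stage would run gradient-based training rather than a single aggregation pass; the point here is only the hand-off of learned embeddings between stages.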
“…The VGG-CAM model used pretrained weights from ImageNet as initial weights; ImageNet comprises over a million images spanning more than 1,000 labeled categories (https://www.image-net.org/). [48] assigns priority to image features: it separates the entire learning process into several sub-learning processes, and the ability to extract different levels of image features is acquired across these successive processes.…”
Section: Model Initialization · Classification: mentioning · Confidence: 99%
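The warm-start idea in that passage, where later learning processes begin from earlier learned weights rather than from a random initialization, can be illustrated with a toy sketch. The linear-regression model, data, and step counts below are assumptions for illustration only, not taken from [48] or VGG-CAM:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(weights, data, lr=0.1, steps=20):
    """Minimal 'sub-learning process': a few gradient-descent steps on a
    least-squares loss, refining whatever weights it is handed."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# The second sub-process starts from the first one's learned weights,
# mirroring how a model warm-started from pretrained weights refines
# them instead of learning from scratch.
w0 = rng.normal(size=3)            # random initialization
w1 = train(w0, (X, y))             # first sub-learning process
w2 = train(w1, (X, y))             # second process, warm-started from w1
print(np.linalg.norm(w2 - true_w) < np.linalg.norm(w0 - true_w))  # True
```

The design point is that each stage inherits the previous stage's parameters, so later stages only need to refine, not relearn, what was already captured.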