2021
DOI: 10.1109/access.2021.3110200
AttrHIN: Network Representation Learning Method for Heterogeneous Information Network

Abstract: Network representation learning can map a complex network to a low-dimensional vector space, capture the topological properties of the network, and reduce the time and space complexity of downstream algorithms. However, most existing network representation learning (NRL) methods target homogeneous networks, while real-world networks are usually heterogeneous; it is therefore more practical to provide intelligent insight into the evolution of heterogeneous networks. In this paper, we propose a novel …
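As a minimal illustration of the idea in the abstract (mapping nodes to low-dimensional vectors) and not the AttrHIN method itself, a truncated SVD of a toy graph's adjacency matrix yields one short embedding vector per node; the graph and dimension below are invented for the sketch:

```python
import numpy as np

# Toy homogeneous graph: 4 nodes, undirected edges 0-1, 0-2, 1-2, 2-3.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Truncated SVD: keep the top-d singular directions as node embeddings.
d = 2
U, S, Vt = np.linalg.svd(A)
embeddings = U[:, :d] * np.sqrt(S[:d])  # shape (4, 2): one 2-d vector per node

print(embeddings.shape)  # (4, 2)
```

Nodes with similar neighborhoods land near each other in this space, which is the property NRL methods exploit for downstream tasks.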

Cited by 1 publication (2 citation statements)
References 32 publications (27 reference statements)
“…Additionally, they increase model flexibility by decomposing user preferences into different anchor vector representations, making them better suited to adapt to varying user requirements and deliver more personalized recommendations. However, the reason MGAT outperforms MMGCN lies in differences in model structure, a conclusion that aligns with the findings in Reference 16. In summary, these factors collectively contribute to MGAT and MMGCN surpassing DeepWalk in recommendation performance and, in some respects, make MGAT perform even better.…”
Section: Methods | supporting
confidence: 86%
“…To address these challenges, the MGAT model (Reference 16) first utilizes user-item interaction records and pre-trained modality-specific features to construct a unimodal graph for each modality. It then employs Graph Attention Networks (GAT) and Gated Recurrent Units (GRU) to learn high-order neighbor information within the graph and capture local semantic information, further enhancing recommendation performance.…”
Section: Introduction | mentioning
confidence: 99%
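The GAT-style neighborhood aggregation mentioned in that statement can be sketched in numpy. This is an illustrative single attention layer, not the authors' MGAT implementation; the graph, feature matrix `H`, projection `W`, and attention vector `a` are all hypothetical placeholders:

```python
import numpy as np

def gat_layer(H, adj, W, a):
    """One illustrative graph-attention layer: each node aggregates its
    neighbors' projected features using softmax attention weights."""
    Z = H @ W                      # project node features, shape (n, d)
    n = Z.shape[0]
    out = np.zeros_like(Z)
    for i in range(n):
        nbrs = np.where(adj[i] > 0)[0]   # neighbors of node i (incl. self-loop)
        # Attention logits from concatenated node pairs, LeakyReLU as in GAT.
        e = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()             # softmax over the neighborhood
        out[i] = alpha @ Z[nbrs]         # attention-weighted aggregation
    return out

# Toy graph with self-loops so every node has at least one neighbor.
adj = np.eye(4) + np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # hypothetical input node features
W = rng.normal(size=(3, 2))   # hypothetical projection weights
a = rng.normal(size=(4,))     # hypothetical attention vector, length 2*d
out = gat_layer(H, adj, W, a)
print(out.shape)  # (4, 2)
```

Stacking such layers (and, in MGAT's case, running one per modality graph before fusing) is what captures the high-order neighbor information the citing paper refers to.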