2022
DOI: 10.1016/j.knosys.2021.107909
EIGAT: Incorporating global information in local attention for knowledge representation learning

Cited by 13 publications (3 citation statements)
References 13 publications
“…HRAN (Li, Liu, et al, 2021) takes full advantage of the heterogeneity of the KG when aggregating neighbour information, but ignores the overall structure. Unlike many models that focus only on 1‑hop neighbourhood information, EIGAT (Zhao et al, 2022) and GFA‑NN (Sadeghi et al, 2021) take into account the global characteristics of the entities, but are still unable to perform cross‑relational information transfer. In recent years, most emerging KGE models have utilized complex neural structures, such as tensor networks, graph convolutional networks, and transformers, to learn richer representations.…”
Section: Related Work
confidence: 99%
“…Dong et al [24] propose a graph node representation learning model that maximizes the mutual information between the current node and its neighbour nodes. The EIGAT [25] algorithm incorporates global features into local attention for knowledge representation learning: it folds global features into the GAT model family by using scaled entity importance, which is computed with an attention-based global random walk strategy. The MEGNN [26] algorithm proposes meta-path-extracting GNNs for heterogeneous graphs, which combine the different bipartite graphs defined by edge types into a new trainable graph structure.…”
Section: Related Work
confidence: 99%
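As a rough illustration of the idea described in the excerpt above (a sketch, not EIGAT's actual implementation), a precomputed global importance weight per entity can be folded into a node's local attention distribution before normalization. The `scaled_attention` helper and the toy numbers below are assumptions for illustration only; in EIGAT the importance values come from an attention-based global random walk, which is omitted here.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def scaled_attention(local_scores, global_importance):
    # Hypothetical combination: add log global importance to the
    # local attention logits, so globally prominent neighbours
    # receive proportionally more attention after normalization.
    combined = local_scores + np.log(global_importance)
    return softmax(combined)

# Toy example: attention logits toward 3 neighbours of one entity,
# and an assumed global importance value for each neighbour.
local = np.array([1.0, 0.5, 0.2])
importance = np.array([0.6, 0.3, 0.1])
weights = scaled_attention(local, importance)  # sums to 1
```

The design choice of adding log-importance (rather than multiplying after softmax) keeps the result a proper probability distribution without a second normalization step.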
“…These approaches can be divided into four categories: (i) translation-based models, which treat the relation embedding as a translation between entity embeddings, such as TransE [10] and TransH [10]; (ii) factorization-based models, which treat the KG as a third-order tensor whose triple scores are obtained through matrix decomposition, such as RESCAL [11] and HOLE [12]; (iii) CNN-based models, which employ convolutional neural networks to determine the scores of triples, such as ConvE [13] and ConvKB [14]; and (iv) graph neural network-based models, which extend convolution operations onto non-Euclidean graph structures, such as RGCN [15], KBGAT [16], EIGAT [17] and CompGCN [18].…”
Section: B Related Work
confidence: 99%
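To make the translation-based category concrete, TransE's well-known scoring function measures how closely the relation embedding translates the head embedding onto the tail embedding; a lower distance means a more plausible triple. The sketch below is a minimal NumPy illustration of that score, not code from any of the cited implementations:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    # TransE plausibility score for a triple (h, r, t):
    # the distance ||h + r - t||. Lower is more plausible,
    # since r should translate h close to t for true triples.
    return np.linalg.norm(h + r - t, ord=norm)

# Toy 2-D embeddings where the relation translates h exactly onto t
h = np.array([0.2, 0.1])
r = np.array([0.3, 0.4])
t = np.array([0.5, 0.5])
transe_score(h, r, t)  # 0.0: a perfectly plausible triple
```

TransH follows the same translation idea but first projects the entity embeddings onto a relation-specific hyperplane, which lets one entity behave differently under different relations.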