Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/183

Knowledge Graph Representation with Jointly Structural and Textual Encoding

Abstract: The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most work focused on the symbolic representation of knowledge graphs with structural information, which cannot handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture that utilizes both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information…
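
For orientation, here is a minimal sketch of the kind of joint encoding the abstract describes: a structure-based entity embedding fused with a text-based one. This is an illustrative PyTorch sketch (the gating fusion, class and parameter names, and dimensions are assumptions), not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class JointEntityEncoder(nn.Module):
        """Sketch: fuse a structural embedding with a textual embedding
        via a learned per-dimension gate (illustrative, not the paper's
        exact model)."""

        def __init__(self, num_entities: int, dim: int = 100):
            super().__init__()
            # Structure-based embedding table (TransE-style lookup).
            self.struct_emb = nn.Embedding(num_entities, dim)
            # Per-entity gate logits, squashed to (0, 1) by a sigmoid.
            self.gate = nn.Embedding(num_entities, dim)

        def forward(self, entity_ids: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
            # text_emb: (batch, dim) output of any text encoder over the
            # entity description (CNN, LSTM, or attentive model).
            g = torch.sigmoid(self.gate(entity_ids))
            return g * self.struct_emb(entity_ids) + (1.0 - g) * text_emb

A gate of this shape lets an entity with a rich description lean on its textual vector, while a well-connected entity with few facts in text leans on its structural one.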

Cited by 92 publications (56 citation statements) | References 20 publications

“…NTN [37] is the earliest model to integrate text descriptions into KG embedding learning [47]; the representations of entities are initialized with the average of the word vectors of the words in their names. TEKE [49] defines context vectors for entities and relations and combines them with traditional models such as TransE. Xie et al. [51] and Xu et al. [52] encode textual literals with convolutional and recurrent neural networks, respectively. LiteralE [24] replaces the original entity embeddings in conventional loss functions with literal-enriched vectors, which are produced by learnable parametrized functions.…”
Section: KG Embedding Techniques (mentioning)
confidence: 99%
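
The two mechanisms this excerpt names most concretely, NTN's name-averaging initialization and LiteralE's learnable combination function g, can be sketched as below. This is a hedged PyTorch illustration: the `word_vecs` lookup, the random fallback, and the choice of LiteralE's gated variant are assumptions of the sketch, not code from those papers.

    import torch
    import torch.nn as nn

    def name_average_init(entity_name: str, word_vecs: dict, dim: int = 100) -> torch.Tensor:
        """NTN-style initialization (sketch): the entity vector starts as the
        average of the pretrained word vectors of the words in its name."""
        vecs = [word_vecs[t] for t in entity_name.lower().split() if t in word_vecs]
        if not vecs:
            # Assumed fallback for names whose tokens are all out of vocabulary.
            return torch.empty(dim).uniform_(-0.1, 0.1)
        return torch.stack(vecs).mean(dim=0)

    class LiteralGate(nn.Module):
        """LiteralE-style enrichment (sketch of the gated variant): a learnable
        parametrized function g(e, l) whose output replaces the plain entity
        embedding e in the loss of a conventional KG embedding model."""

        def __init__(self, ent_dim: int, lit_dim: int):
            super().__init__()
            self.gate = nn.Linear(ent_dim + lit_dim, ent_dim)   # gate z
            self.trans = nn.Linear(ent_dim + lit_dim, ent_dim)  # candidate h

        def forward(self, e: torch.Tensor, l: torch.Tensor) -> torch.Tensor:
            x = torch.cat([e, l], dim=-1)
            z = torch.sigmoid(self.gate(x))
            h = torch.tanh(self.trans(x))
            return z * h + (1.0 - z) * e  # interpolate enriched and original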
“…Compared with these studies, we are the first to incorporate multi-head graph attention (Sukhbaatar et al., 2015; Madotto et al., 2018; Veličković et al., 2018) to encourage the model to capture multi-aspect relevance among nodes. Similar to Wang and Li (2016) and Xu et al. (2017), we enrich the entity representation by combining the contextual sentences that include the target entity and its neighbors from the graph structure. This is the first work to incorporate new-idea creation via link prediction into automatic paper writing.…”
Section: Related Work (mentioning)
confidence: 99%
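
A minimal sketch of the multi-head graph attention the excerpt credits to Veličković et al. (2018): each head computes its own attention distribution over a node's neighbors, so different heads can pick up different aspects of relevance. The dense adjacency mask, head count, and exact scoring are simplifying assumptions, and the sketch assumes `adj` includes self-loops so every row has at least one neighbor.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiHeadGraphAttention(nn.Module):
        """GAT-style layer (sketch): per-head attention over neighbors,
        heads concatenated at the output."""

        def __init__(self, in_dim: int, out_dim: int, heads: int = 4):
            super().__init__()
            self.heads, self.out_dim = heads, out_dim
            self.proj = nn.Linear(in_dim, heads * out_dim, bias=False)
            self.attn = nn.Parameter(torch.empty(heads, 2 * out_dim))
            nn.init.xavier_uniform_(self.attn)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x: (N, in_dim) node features; adj: (N, N) 0/1 mask with self-loops.
            n = x.size(0)
            h = self.proj(x).view(n, self.heads, self.out_dim)      # (N, H, D)
            # Attention logits e[i, j, h]: LeakyReLU of a linear score over
            # the target (i) and neighbor (j) features, as in GAT.
            src = (h * self.attn[:, : self.out_dim]).sum(-1)        # (N, H)
            dst = (h * self.attn[:, self.out_dim :]).sum(-1)        # (N, H)
            e = F.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0))   # (N, N, H)
            e = e.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
            alpha = torch.softmax(e, dim=1)                         # over neighbors j
            out = torch.einsum("ijh,jhd->ihd", alpha, h)            # weighted sum
            return out.reshape(n, self.heads * self.out_dim)        # concat heads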
“…Because the TransE model represents a one-to-one relationship between two entities [7], relationships among many entities must be continuously incorporated into texts and the knowledge graph [44]. This model combines a convolutional neural network with textual information extraction, which fully exploits the semantic information in the knowledge graph and text [45,46]. In Figure 4, the knowledge graph contains rich semantics in entity description texts, but they are not fully utilized in feature extraction.…”
Section: Combined Features (mentioning)
confidence: 99%
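
The one-to-one limitation that this excerpt attributes to TransE [7] falls directly out of its scoring function: a relation is a single translation vector, so if (h, r, t1) and (h, r, t2) both score well, t1 and t2 are pulled toward the same point. A minimal sketch of that score (the default p-norm here is an assumed configuration detail; the original paper uses L1 or L2):

    import torch

    def transe_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor, p: int = 1) -> torch.Tensor:
        """TransE (sketch): a relation is a translation in embedding space,
        so a plausible triple satisfies h + r ≈ t; lower distance = better."""
        return torch.norm(h + r - t, p=p, dim=-1)

This is why models that fold in text, like the CNN-based combination described above, can help: the description text supplies entity-specific signal that a single shared translation vector cannot.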