2018
DOI: 10.1609/aaai.v32i1.11573

Convolutional 2D Knowledge Graph Embeddings

Abstract: Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models, which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also sho…
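Since the abstract is truncated in this record, it may help to recall the scoring function ConvE assigns to a triple (s, r, o), as presented in Dettmers et al. (2018):

\[
\psi_r(e_s, e_o) = f\big(\operatorname{vec}\big(f([\bar{e}_s;\, \bar{r}_r] * \omega)\big)\, W\big)\, e_o ,
\]

where \(\bar{e}_s\) and \(\bar{r}_r\) are 2D reshapings of the subject-entity and relation embeddings, \(\omega\) is a set of 2D convolutional filters, \(W\) is a projection matrix, and \(f\) is a non-linearity; the resulting logits are passed through a sigmoid and trained with binary cross-entropy.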

Cited by 1,398 publications (674 citation statements)
References 22 publications
“…The function of the generator is to provide high-quality negative triplets to improve the discriminator. The convolution structure is proven to be efficient in KBC (Dettmers et al., 2018). To our knowledge, we are the first to apply a convolution structure to generate negative triplets.…”
Section: Generator
Citation type: mentioning (confidence: 99%)
“…The sufficient feature interaction brought from convolutional layers can also generate high-quality negative triplets to deceive the discriminator. Here, we use ConvE (Dettmers et al., 2018) to exemplify the feasibility of the structure, while we attempted depthwise separable convolution (Chollet, 2017) and involution (Li et al., 2021) in the extended experiments. The score function in the generator is given as:…”
Section: Generator
Citation type: mentioning (confidence: 99%)
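The quoted score function is cut off in this record. As background, the sketch below shows a minimal ConvE-style scorer of the kind such a generator could build on; the ConvEScorer class, its layer sizes, the filter count, and the hard-negative selection at the end are illustrative assumptions, not the cited papers' actual configurations.

```python
# Minimal ConvE-style scorer (illustrative sketch, not the authors' reference code).
# Embedding sizes, filter counts, and reshape dimensions below are assumptions.
import torch
import torch.nn as nn

class ConvEScorer(nn.Module):
    def __init__(self, num_entities, num_relations, emb_dim=200, emb_h=10, emb_w=20):
        super().__init__()
        assert emb_h * emb_w == emb_dim
        self.emb_h, self.emb_w = emb_h, emb_w
        self.entity_emb = nn.Embedding(num_entities, emb_dim)
        self.relation_emb = nn.Embedding(num_relations, emb_dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)        # 2D convolution over the stacked reshapes
        flat_dim = 32 * (2 * emb_h - 2) * (emb_w - 2)      # feature-map size after the 3x3 convolution
        self.fc = nn.Linear(flat_dim, emb_dim)             # project back to embedding space

    def forward(self, subj_idx, rel_idx):
        # Reshape subject and relation embeddings into 2D "images" and stack them.
        e_s = self.entity_emb(subj_idx).view(-1, 1, self.emb_h, self.emb_w)
        r_r = self.relation_emb(rel_idx).view(-1, 1, self.emb_h, self.emb_w)
        x = torch.cat([e_s, r_r], dim=2)                   # (batch, 1, 2*emb_h, emb_w)
        x = torch.relu(self.conv(x))
        x = torch.relu(self.fc(x.flatten(start_dim=1)))
        # 1-N scoring: one dot product per candidate object entity.
        return x @ self.entity_emb.weight.t()              # (batch, num_entities) logits

# Illustrative use for picking hard negative objects (the generator idea in the quote above);
# filtering out known true triples is omitted for brevity.
scorer = ConvEScorer(num_entities=1000, num_relations=50)
logits = scorer(torch.tensor([3]), torch.tensor([7]))
hard_negatives = logits.topk(k=5, dim=1).indices           # highest-scoring candidate objects
```

Given logits over all candidate objects, a generator along these lines can propose the top-scoring corrupted triples as hard negatives for the discriminator.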