2019
DOI: 10.1007/978-3-030-30793-6_25
Ontology Completion Using Graph Convolutional Networks

Abstract: Many methods have been proposed to automatically extend knowledge bases, but the vast majority of these methods focus on finding plausible missing facts, and knowledge graph triples in particular. In this paper, we instead focus on automatically extending ontologies that are encoded as a set of existential rules. In particular, our aim is to find rules that are plausible, but which cannot be deduced from the given ontology. To this end, we propose a graph-based representation of rule bases. Nodes of the conside…
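The abstract describes applying graph convolutional networks to a graph-based representation of rule bases. A minimal sketch of a single GCN propagation layer (in the symmetric-normalisation style of Kipf and Welling) illustrates the kind of message passing involved; the toy graph, feature sizes, and weights below are hypothetical and not taken from the paper.

```python
# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
# A: adjacency matrix, H: node features, W: learnable weights.
import numpy as np

def gcn_layer(A, H, W):
    """Single graph-convolution propagation step with self-loops."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalisation
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy graph: 3 nodes (e.g. rules connected when they share predicates),
# 2 input features per node, 2 output features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 2))
W = np.random.default_rng(1).normal(size=(2, 2))
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Stacking several such layers lets each node's representation aggregate information from progressively larger graph neighbourhoods.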

Cited by 14 publications
(17 citation statements)
References 94 publications
“…There exist numerous approaches to knowledge base completion, which make use of various neural network architectures described in review papers [55,61]. Most of them apply low-dimensional graph embeddings [13,72], deep learning architectures like autoencoders [86] or graph convolutional networks [23,41,70]. Another group of approaches makes use of tensor decomposition approaches: Tucker decomposition [3] and Canonical Polyadic decomposition (CANDECOMP/PARAFAC) [39,40].…”
Section: Related Work
confidence: 99%
“…There exist numerous approaches to knowledge base completion, which make use of various neural network architectures described in review papers [13,14]. Most of them apply low-dimensional graph embeddings [15,16], deep learning architectures like autoencoders [17] or graph convolutional networks [18,19,20]. Another group of approaches makes use of tensor decomposition approaches: Tucker decomposition [21] and Canonical Polyadic decomposition (CANDECOMP/PARAFAC) [22,23].…”
Section: Related Work
confidence: 99%
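The related-work statements above contrast GCN-based completion with tensor-decomposition approaches such as Tucker decomposition. A hedged sketch of a TuckER-style scoring function shows the core idea: a triple (subject, relation, object) is scored by contracting a learnable core tensor with the three embeddings. All dimensions and embeddings below are illustrative toy values, not taken from any cited system.

```python
# TuckER-style triple scoring: score(s, r, o) = W x1 e_s x2 w_r x3 e_o,
# i.e. the core tensor W contracted along its three modes.
import numpy as np

rng = np.random.default_rng(42)
de, dr = 4, 3                       # toy entity / relation embedding sizes
W = rng.normal(size=(de, dr, de))   # learnable core tensor
E = rng.normal(size=(5, de))        # embeddings for 5 entities
R = rng.normal(size=(2, dr))        # embeddings for 2 relations

def score(s, r, o):
    """Contract core tensor with subject, relation, object embeddings."""
    return np.einsum('ijk,i,j,k->', W, E[s], R[r], E[o])

print(score(0, 1, 2))  # plausibility score for the triple (0, 1, 2)
```

Canonical Polyadic decomposition can be seen as the special case where the core tensor is (super-)diagonal, which reduces the number of parameters at the cost of expressiveness.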
“…However, static word vectors remain important in applications where word meaning has to be modelled in the absence of (sentence) context. For instance, static word vectors are needed for zero-shot image classification (Socher et al., 2013) and zero-shot entity typing (Ma et al., 2016), for ontology alignment (Kolyvakis et al., 2018) and completion (Li et al., 2019), taxonomy learning (Bordea et al., 2015, 2016), or for representing query terms in information retrieval systems (Nikolaev and Kotov, 2020). Moreover, Liu et al. (2020) recently found that static word vectors can complement CLMs, by serving as anchors for contextualized vectors, while Alghanmi et al. (2020) found that incorporating static word vectors could improve the performance of BERT for social media classification.…”
Section: Introduction
confidence: 99%