Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1431
Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs

Abstract: Link prediction is an important way to complete knowledge graphs (KGs), while embedding-based methods, effective for link prediction in KGs, perform poorly on relations that only have a few associative triples. In this work, we propose a Meta Relational Learning (MetaR) framework to do the common but challenging few-shot link prediction in KGs, namely predicting new triples about a relation by only observing a few associative triples. We solve few-shot link prediction by focusing on transferring relation-speci…
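To make the task concrete, here is a minimal sketch of one few-shot link-prediction episode in the spirit of MetaR, written in PyTorch. The function names, the plain averaging of (tail - head) differences as the relation meta, the TransE-style scoring, and the tensor shapes are all illustrative assumptions rather than the paper's exact architecture (MetaR learns the relation meta with a network and additionally refines it with a gradient step on the support loss, which is omitted here).

```python
import torch

def relation_meta(support_heads, support_tails):
    # Hypothetical relation-meta learner: average the (tail - head) embedding
    # differences over the K support triples. MetaR learns this mapping with a
    # neural network; the plain mean here is a simplifying assumption.
    return (support_tails - support_heads).mean(dim=0)

def score(heads, rel, tails):
    # TransE-style plausibility: ||h + r - t||, where lower means more plausible.
    return torch.norm(heads + rel - tails, p=2, dim=-1)

# Toy few-shot episode: 3 support triples and 2 query triples of one relation,
# with 8-dimensional entity embeddings (random values as placeholders).
dim = 8
sup_h, sup_t = torch.randn(3, dim), torch.randn(3, dim)
qry_h, qry_t = torch.randn(2, dim), torch.randn(2, dim)

r_meta = relation_meta(sup_h, sup_t)   # relation meta inferred from the support set
print(score(qry_h, r_meta, qry_t))     # scores for the query triples
```

In this setup only the support triples of a new relation are observed, and the derived relation meta is what gets transferred to score unseen query triples for that relation.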


Cited by 157 publications (125 citation statements)
References 16 publications
“…Additionally, some models exploit metadata as a source of prior knowledge about knowledge graph entities. Such models learn from available information such as relation or entity types, which can improve inference about missing triples (Lv et al., 2019; Wang et al., 2019a; Chen et al., 2019). Furthermore, the multi-source Knowledge Representation Learning model (Tang et al., 2019) is multimodal in that it combines both features and models.…”
Section: Multimodal Embedding Methods
confidence: 99%
“…For the low-resource setting, [44] proposed a one-shot relational learning framework that learns a matching metric by considering both the learned embeddings and one-hop graph structures. [6] proposed a Meta Relational Learning (MetaR) framework for few-shot link prediction in KGs. [59] propose IterE, a novel framework that iteratively learns embeddings and rules, which improves the quality of sparse entity embeddings and their link prediction results.…”
Section: Related Work
confidence: 99%
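As an illustration of the matching-metric idea in the statement above, the following sketch scores candidate entity pairs against a single reference (support) pair. The pair_repr and matching_score helpers, the concatenation encoder, and the use of cosine similarity are assumptions made for illustration; the cited one-shot framework additionally encodes one-hop neighbor structure, which is omitted here.

```python
import torch
import torch.nn.functional as F

def pair_repr(head_emb, tail_emb):
    # Hypothetical entity-pair encoder: concatenate head and tail embeddings.
    # The cited framework also encodes one-hop neighbors; omitted for brevity.
    return torch.cat([head_emb, tail_emb], dim=-1)

def matching_score(reference_pair, candidate_pairs):
    # Cosine similarity as a stand-in matching metric between the single
    # reference (support) pair and each candidate (query) pair.
    return F.cosine_similarity(reference_pair.expand_as(candidate_pairs),
                               candidate_pairs, dim=-1)

dim = 8
ref = pair_repr(torch.randn(dim), torch.randn(dim))           # the one reference pair
cands = pair_repr(torch.randn(5, dim), torch.randn(5, dim))   # 5 candidate pairs
print(matching_score(ref, cands))   # higher score = more likely to hold the relation
```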
“…Some other methods utilize extra textual information to represent entities and relations, for example DKRL [17] and Open-world KGC [12]. MetaR [4] concentrates on few-shot link prediction in knowledge graphs. In this problem, a few triples are given at training time, while in the OOKG entity and relation problem, auxiliary triples are given at testing time.…”
Section: Related Work
confidence: 99%