Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3450141

Inductive Entity Representations from Text via Link Prediction

Abstract: Knowledge Graphs (KGs) are of vital importance for multiple applications on the web, including information retrieval, recommender systems, and metadata annotation. Regardless of whether they are built manually by domain experts or with automatic pipelines, KGs are often incomplete. To address this problem, a large body of work proposes using machine learning to complete these graphs by predicting new links. Recent work has begun to explore the use of textual descriptions available in knowledge graphs…

Cited by 66 publications (59 citation statements)
References 49 publications
“…Pooling entity representations (Baldini Soares et al., 2019); embedding relations externally (Wang et al., 2021d; Daza et al., 2021); treating relations as tokens (Bosselut et al., 2019; Hwang et al., 2021). English Wikipedia has around 6 million entities. This can make pretraining on a larger vocabulary expensive in terms of both time and memory usage (Yamada et al., 2020).…”
Section: Linking With Middle or Early Fusion (mentioning)
confidence: 99%
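Of the three strategies this quote lists, "treating relations as tokens" is the easiest to picture: the relation inventory is added to the LM vocabulary and relations appear inline like ordinary words. The sketch below illustrates the idea with the Hugging Face transformers API; the model choice, the ATOMIC-style relation names, and the prompt are illustrative assumptions, not the cited systems' actual setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical relation inventory; COMET-style models register one special
# token per KG relation so it can appear inline in the input sequence.
RELATIONS = ["[xIntent]", "[xEffect]", "[AtLocation]"]

tok = AutoTokenizer.from_pretrained("gpt2")
tok.add_special_tokens({"additional_special_tokens": RELATIONS})

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.resize_token_embeddings(len(tok))  # grow embeddings to cover new tokens

# The relation is just another token: condition on "<head> <relation>" and
# let the LM produce the tail.
prompt = "PersonX goes to the bakery [xIntent]"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
```

Until fine-tuned on KG triples, the new relation embeddings are random, so the generated continuation is meaningless; the point is only where the relation enters the model.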
“…Non-contextual relation embeddings may be learned by defining a separate relation embedding matrix with |R| rows and fusing this matrix into the LM. One advantage of this approach, similar to methods for retrieving external entity embeddings (§4.3), is that it supports fusion at both the late (Wang et al., 2021d; Daza et al., 2021) and middle (Liu et al., 2021c) stages. As an example of the former, Wang et al. (2021d) propose an LM pretraining objective whereby textual descriptions of KB entities are input to and encoded by an LM, then combined with externally-learned relation embeddings at the output using a link prediction loss (Figure 5b).…”
Section: Relations As Dedicated Embeddings (mentioning)
confidence: 99%
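A minimal sketch of this late-fusion recipe, assuming a BERT encoder, a TransE-style scorer, and a margin ranking loss; the class and variable names are invented for illustration, and the cited papers also explore other scoring functions:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LateFusionScorer(nn.Module):
    # Entity embeddings come from an LM over textual descriptions; relations
    # live in a separate |R|-row embedding matrix fused only at the output.
    def __init__(self, num_relations, lm_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(lm_name)
        dim = self.encoder.config.hidden_size
        self.rel_emb = nn.Embedding(num_relations, dim)

    def encode(self, texts, tok):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        return self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] vector

    def score(self, h, r_ids, t):
        # TransE-style score: higher (less negative) means more plausible.
        return -torch.norm(h + self.rel_emb(r_ids) - t, p=1, dim=-1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = LateFusionScorer(num_relations=237)

h = model.encode(["Berlin, the capital and largest city of Germany."], tok)
t = model.encode(["Germany, a country in Central Europe."], tok)
t_neg = model.encode(["Jupiter, the fifth planet from the Sun."], tok)

r = torch.tensor([0])  # e.g. a 'capital_of' relation id (hypothetical)
loss = torch.relu(1.0 + model.score(h, r, t_neg) - model.score(h, r, t)).mean()
loss.backward()  # gradients flow into both the LM and the relation matrix
```

Because relations are fused only at the output, entities unseen during training can still be scored by encoding their descriptions at test time, which is what makes this family of approaches inductive.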
“…While most of those approaches only consider graphs with nodes and edges, most knowledge graphs also contain literals, e.g., strings and numeric values. Recently, approaches combining textual information with knowledge graph embeddings using language modeling techniques have also been proposed, based on word2vec and convolutional neural networks [45] or transformer methods [9,43]. [11] provides a survey of approaches which take such literal information into account.…”
Section: Related Work (mentioning)
confidence: 99%
“…⟨h, r, e₁⟩ and ⟨h, r, e₂⟩, or, less strongly, if there exists a chain of such statements. More formally, we can write the notion of similarity between two entities in link prediction approaches as

e₁ ≈ e₂ ← ∃t, r : r(e₁, t) ∧ r(e₂, t)   (19)
e₁ ≈ e₂ ← ∃h, r : r(h, e₁) ∧ r(h, e₂)   (20)
…”
(mentioning)
confidence: 99%
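The two rules can be checked mechanically on a toy graph. The snippet below is only a literal transcription of (19) and (20), with hypothetical entities and relations:

```python
# Toy check of the two rules above: e1 and e2 count as similar if some
# relation links them both to a shared tail (19) or from a shared head (20).
triples = {
    ("Berlin", "locatedIn", "Germany"),
    ("Hamburg", "locatedIn", "Germany"),
    ("Germany", "hasCapital", "Berlin"),
}

def similar(e1, e2, kg):
    shared_tail = any((e1, r, t) in kg and (e2, r, t) in kg for (_, r, t) in kg)
    shared_head = any((h, r, e1) in kg and (h, r, e2) in kg for (h, r, _) in kg)
    return shared_tail or shared_head

print(similar("Berlin", "Hamburg", triples))  # True: both locatedIn Germany
print(similar("Berlin", "Germany", triples))  # False under these two rules
```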
“…For example, KEPLER [35] proposes to encode textual entity descriptions with BERT as their embeddings, and then jointly optimize the KGE and language modeling objectives. BLP [13] trains the PLM and KG embeddings in an end-to-end manner. Since the language modeling objective of PLMs suffers from high computational cost and requires a large corpus for training, it is time-consuming to apply these methods to large-scale KGs.…”
Section: Related Work (mentioning)
confidence: 99%
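A compressed sketch of what "jointly optimize the KGE and language modeling objectives" means in practice, assuming a BERT backbone and a TransE-style KE term; KEPLER itself uses RoBERTa with negative sampling over a large KG, and the label handling below is simplified:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Minimal sketch of a KEPLER-style joint objective: one shared encoder,
# a KE (link prediction) loss over description embeddings plus an MLM loss.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def embed(text):
    batch = tok(text, return_tensors="pt")
    return lm.bert(**batch).last_hidden_state[:, 0]  # encoder without MLM head

# KE loss: TransE-style distance between description-derived embeddings.
h = embed("Berlin, the capital of Germany.")
t = embed("Germany, a country in Central Europe.")
r = torch.zeros_like(h, requires_grad=True)  # learnable relation vector
ke_loss = torch.norm(h + r - t, p=1, dim=-1).mean()

# MLM loss on free text, through the same encoder weights. (Real MLM sets
# labels to -100 everywhere except masked positions; simplified here.)
batch = tok("Berlin is the [MASK] of Germany.", return_tensors="pt")
mlm_loss = lm(**batch, labels=batch["input_ids"]).loss

(ke_loss + mlm_loss).backward()  # both objectives update the shared encoder
```

Sharing one encoder across both losses is what makes these methods expensive at scale, which is the cost the quoted passage points to.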