2020
DOI: 10.1007/978-3-030-45439-5_7

Graph-Embedding Empowered Entity Retrieval

Abstract: In this research, we improve upon the current state of the art in entity retrieval by re-ranking the result list using graph embeddings. The paper shows that graph embeddings are useful for entity-oriented search tasks. We demonstrate empirically that encoding information from the knowledge graph into (graph) embeddings yields a greater improvement in the effectiveness of entity retrieval than using plain word embeddings. We analyze the impact of the accuracy of the entity linker on the overall retriev…


Cited by 22 publications (15 citation statements) | References 25 publications

Citation statements (ordered by relevance):
“…The entity and word embeddings used for selecting candidate entities are trained on a Wikipedia 2019-07 dump using the Wikipedia2Vec package. Following [9], we set the min-entity-count parameter to zero and used the Wikipedia link graph during training. For the entity disambiguation model, we used GloVe embeddings [20] as suggested in [15].…”
Section: Input
confidence: 99%
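As a concrete illustration of how such pretrained Wikipedia2Vec word and entity embeddings are typically loaded and queried, here is a minimal sketch; the model file name and the example word/entity are placeholders, not taken from the cited work.

```python
# Hedged sketch (not the cited authors' code): loading and querying Wikipedia2Vec embeddings.
# The model file name is a placeholder; training would use the wikipedia2vec CLI, e.g. with
# --min-entity-count 0 and the Wikipedia link graph enabled, as described above.
import numpy as np
from wikipedia2vec import Wikipedia2Vec

model = Wikipedia2Vec.load("enwiki_20190701_300d.pkl")  # placeholder path

word_vec = model.get_word_vector("retrieval")                   # embedding of a word
entity_vec = model.get_entity_vector("Information retrieval")   # embedding of a Wikipedia entity

# Cosine similarity between the word and entity vectors.
sim = np.dot(word_vec, entity_vec) / (np.linalg.norm(word_vec) * np.linalg.norm(entity_vec))
print(f"word-entity similarity: {sim:.3f}")
```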
“…This threshold allows for filtering personal entity mentions that do not have the corresponding entities in the conversation history. We used Wikipedia2Vec [60] word and entity embeddings released by Gerritse et al [31]. The threshold τ was set empirically by performing a sweep (on the range [0, 1] in steps of 0.1) using 5-fold cross-validation.…”
Section: Annotation Results
confidence: 99%
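A threshold sweep of this kind could look roughly like the following sketch; the objective function, the synthetic data, and the per-fold averaging are illustrative assumptions rather than the cited authors' procedure.

```python
# Hedged sketch (illustrative, not the cited authors' code): choose a similarity threshold tau
# by sweeping the range [0, 1] in steps of 0.1 with 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import KFold

def accuracy_at_threshold(tau, sims, labels):
    """Placeholder objective: agreement between thresholded similarities and binary labels."""
    return float(np.mean((sims >= tau) == labels))

def select_threshold(sims, labels, n_folds=5):
    taus = np.arange(0.0, 1.01, 0.1)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=42)
    per_fold_best = []
    for train_idx, _ in kf.split(sims):
        best = max(taus, key=lambda t: accuracy_at_threshold(t, sims[train_idx], labels[train_idx]))
        per_fold_best.append(best)
    return float(np.mean(per_fold_best))  # average of the per-fold optima

# Toy usage with synthetic similarities and labels.
rng = np.random.default_rng(0)
sims = rng.random(200)
labels = sims > 0.6
print(f"selected tau: {select_threshold(sims, labels):.1f}")
```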
“…Experimental setup. As in previous work on embeddings for information retrieval [13], we implement a re-ranking procedure by updating the list of document scores assigned by an existing IR system (e.g. BM25).…”
Section: Methods
confidence: 99%
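The re-ranking idea described above can be sketched as a linear interpolation of the first-stage retrieval score with an embedding-based similarity; the interpolation weight, score normalisation, and data layout below are assumptions for illustration, not the exact formulation from the cited work.

```python
# Hedged sketch (illustrative, not the cited authors' code): re-rank a first-stage BM25 result
# list by interpolating the retrieval score with an embedding similarity.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rerank(bm25_scores, query_emb, doc_embs, lam=0.5):
    """Return doc ids sorted by lam * normalised_bm25 + (1 - lam) * cosine(query, doc).

    bm25_scores: dict doc_id -> score from the existing IR system (e.g. BM25)
    doc_embs:    dict doc_id -> document/entity embedding vector
    lam:         interpolation weight (an assumed hyper-parameter)
    """
    lo, hi = min(bm25_scores.values()), max(bm25_scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    combined = {
        d: lam * (s - lo) / span + (1 - lam) * cosine(query_emb, doc_embs[d])
        for d, s in bm25_scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)
```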
“…Knowledge graphs provide a structured way to represent information in the form of entities and relations between them [12]. They have become central to a variety of tasks on the Web, including information retrieval [6,13], question answering [19,43], and information extraction [4,14,26]. Many of these tasks can benefit from distributed representations of entities and relations, also known as embeddings.…”
Section: Introduction
confidence: 99%