2022
DOI: 10.1162/tacl_a_00460

Multilingual Autoregressive Entity Linking

Abstract: We present mGENRE, a sequence-to-sequence system for the Multilingual Entity Linking (MEL) problem—the task of resolving language-specific mentions to a multilingual Knowledge Base (KB). For a mention in a given language, mGENRE predicts the name of the target entity left-to-right, token-by-token in an autoregressive fashion. The autoregressive formulation allows us to effectively cross-encode mention string and entity names to capture more interactions than the standard dot product between mention and entity…
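The autoregressive formulation in the abstract amounts to ranking candidate entity names by their token-level log-probability conditioned on the mention context, rather than by a single dot product between two fixed vectors. A minimal toy sketch of that ranking loop (the `toy_logprob` scorer is a made-up stand-in, not mGENRE's actual model):

```python
import math

def rank_candidates(mention, candidates, token_logprob):
    """Score each candidate entity name autoregressively:
    sum log P(token_i | mention, tokens_<i) over the name's tokens,
    then rank candidates by total log-probability (highest first)."""
    scored = []
    for name in candidates:
        tokens = name.split()
        total = 0.0
        for i, tok in enumerate(tokens):
            total += token_logprob(mention, tokens[:i], tok)
        scored.append((total, name))
    scored.sort(reverse=True)
    return [name for _, name in scored]

# Toy stand-in scorer: favors name tokens that appear in the mention context.
def toy_logprob(mention, prefix, token):
    return 0.0 if token.lower() in mention.lower() else math.log(0.5)

mention = "He played for Inter Milan before retiring."
candidates = ["Inter Milan", "Inter Miami CF", "Milan Fashion Week"]
print(rank_candidates(mention, candidates, toy_logprob))
```

Because the scorer conditions each name token on both the mention and the previously generated tokens, it can capture interactions that a single mention-vector / entity-vector dot product cannot.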

Cited by 82 publications (148 citation statements) · References 50 publications (100 reference statements)
“…As shown in Table 2, models that were not trained on this specific task perform it very poorly. Entity linking is a task that is normally better performed by models explicitly designed for it (Cao et al., 2020). We nevertheless include it to showcase the ability of neural retrievers to adapt to it, and note how well the multi-task retriever performs on it in spite of its unusual nature.…”

Section: Universal Retrieval
confidence: 99%
“…Finally, two entity linkers, GENRE (Cao et al., 2020) and BLINK, are worth mentioning. Being trained specifically for entity linking, these models will generally outperform retrieval-based approaches on that task.…”

Section: Related Work
confidence: 99%
“…Considering the great scale and heterogeneous relations of Wikipedia, graph-based techniques generally do not apply, and large models are widely adopted to represent text features. BLINK [132] encodes contexts and entity descriptions via large models, while GENRE [133] builds an autoregressive model that directly outputs the unique name of the chosen entity. These new models may also be viewed as “pre-trained entity linking models”: they are trained on the large-scale Wikipedia corpus and can be easily transferred to specific datasets by fine-tuning on in-domain training data.…”

Section: Knowledge Graph Completion and Integration
confidence: 99%
“…Inference in Seq2seq EL applies beam search (Sutskever et al., 2014) with targets constrained to the name set S by a prefix tree constructed from S. Unlike mGENRE (Cao et al., 2021b), which uses provided candidates to decrease the size of the prefix tree, we use the whole name set S instead.…”

Section: Seq2seq EL
confidence: 99%
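The prefix-tree constraint described in this citation can be sketched in a few lines: a trie built over the entity name set restricts the decoder, at every step, to tokens that can still be extended into a valid name. This is an illustrative toy (token-level trie plus a greedy decoder with a made-up scorer), not the paper's implementation:

```python
def build_trie(names):
    """Prefix tree over tokenized entity names; '<eos>' marks a complete name."""
    root = {}
    for name in names:
        node = root
        for tok in name.split():
            node = node.setdefault(tok, {})
        node["<eos>"] = {}
    return root

def allowed_next(trie, prefix):
    """Tokens the decoder may emit after `prefix`, per the trie constraint."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return list(node.keys())

def constrained_decode(trie, score):
    """Greedy decode: at each step pick the highest-scoring token among the
    trie-allowed continuations, stopping at '<eos>'. Beam search would keep
    the top-k prefixes instead of a single one."""
    prefix = []
    while True:
        best = max(allowed_next(trie, prefix), key=score)
        if best == "<eos>":
            return " ".join(prefix)
        prefix.append(best)

names = ["Inter Milan", "Inter Miami CF", "Milan Fashion Week"]
trie = build_trie(names)
print(allowed_next(trie, ["Inter"]))  # only continuations of names starting "Inter"
print(constrained_decode(trie, lambda t: 1 if t.lower() in "inter milan" else 0))
```

Using the whole name set S, as the citing work does, simply means building this trie over every name rather than over a pre-filtered candidate list; the trade-off is a larger trie in exchange for not depending on a candidate generator.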