Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.400

Entities as Experts: Sparse Memory Access with Entity Supervision

Abstract: We focus on the problem of capturing declarative knowledge about entities in the learned parameters of a language model. We introduce a new model, Entities as Experts (EAE), that can access distinct memories of the entities mentioned in a piece of text. Unlike previous efforts to integrate entity knowledge into sequence models, EAE's entity representations are learned directly from text. We show that EAE's learned representations capture sufficient knowledge to answer TriviaQA questions such as "Which Dr. Who vil…
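
The mechanism the abstract describes, reading a dedicated memory slot for each entity mention, can be sketched in a few lines. This is a minimal illustration based only on the abstract's wording, not the authors' released code: the class name `EntityMemory`, the use of span start/end states as the memory query, and the projection sizes are all assumptions.

```python
# Hedged sketch of EAE-style entity memory access (illustrative, not the authors' code).
import torch
import torch.nn as nn


class EntityMemory(nn.Module):
    """One learned embedding per entity, queried by mention-span representations."""

    def __init__(self, num_entities: int, hidden_dim: int, entity_dim: int):
        super().__init__()
        self.entity_embeddings = nn.Embedding(num_entities, entity_dim)  # the entity memory
        self.span_proj = nn.Linear(2 * hidden_dim, entity_dim)  # mention span -> memory query
        self.out_proj = nn.Linear(entity_dim, hidden_dim)       # retrieved memory -> hidden size

    def forward(self, hidden, span_starts, span_ends):
        # hidden: [batch, seq_len, hidden_dim]; span_starts/span_ends: [batch, num_mentions]
        idx = span_starts.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        start_states = torch.gather(hidden, 1, idx)
        idx = span_ends.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        end_states = torch.gather(hidden, 1, idx)
        # Query the memory with the concatenated span-boundary states.
        query = self.span_proj(torch.cat([start_states, end_states], dim=-1))
        scores = query @ self.entity_embeddings.weight.T   # [batch, num_mentions, num_entities]
        probs = torch.softmax(scores, dim=-1)
        retrieved = probs @ self.entity_embeddings.weight   # weighted read over entity memories
        # scores can be supervised with entity-linking labels, giving the "entity supervision"
        # in the paper's title; the retrieved vector is projected back to the hidden size.
        return self.out_proj(retrieved), scores
```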

Cited by 83 publications (110 citation statements)
References 24 publications
“…Incorporating KGs: Most prior works on incorporating KG with text often learn KG entity representations and add them to the mention spans linked to the entity (Févry et al., 2020) or create subgraphs relevant to the query that are expanded with text in the embedding space (Sun et al., 2019; Xiong et al., 2019). Some others incorporate additional modules.…”
Section: Related Work
confidence: 99%
“…Some others incorporate additional modules. Verga et al. (2020) extend Févry et al. (2020) by adding a triple memory with the (subject, relation) encoding as the key and the object encoding as the value. Das et al. (2017) use universal schema (Riedel et al., 2013) that embeds text and KGs in a shared space for their integration.…”
Section: Related Work
confidence: 99%
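
The (subject, relation) → object memory described in the quotation above can be illustrated as a key-value lookup. This is a rough sketch under assumptions: the class name `TripleMemory`, dot-product scoring, and top-k retrieval are illustrative choices, not details taken from Verga et al. (2020).

```python
# Hedged sketch of a fact ("triple") memory: key = (subject, relation) encoding,
# value = object encoding. Names and shapes are assumptions for illustration.
import torch
import torch.nn as nn


class TripleMemory(nn.Module):
    def __init__(self, num_facts: int, key_dim: int, value_dim: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_facts, key_dim) * 0.02)     # (subject, relation) keys
        self.values = nn.Parameter(torch.randn(num_facts, value_dim) * 0.02)  # object values

    def forward(self, query: torch.Tensor, top_k: int = 32) -> torch.Tensor:
        # query: [batch, key_dim], e.g. a contextual encoding of the question span.
        scores = query @ self.keys.T                      # score every (subject, relation) key
        top_scores, top_idx = scores.topk(top_k, dim=-1)  # read only the top-k facts
        weights = torch.softmax(top_scores, dim=-1)
        top_values = self.values[top_idx]                 # [batch, top_k, value_dim]
        return (weights.unsqueeze(-1) * top_values).sum(dim=1)
```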
“…In addition to , augmenting memory architecture to a language model is a promising research direction. For example, EaE (Févry et al., 2020) and FaE (Verga et al., 2020) jointly train a memory that is interleaved in a transformer and dedicated to entities (or facts) with sparse updates, and access to only a small portion of the memory in inference time. On the other hand, each memory slot in and ours does not have explicit meaning.…”
Section: Memory-augmented Language Models
confidence: 99%
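
The quoted passage describes a memory interleaved between transformer layers, with only a small portion of the memory read at inference. A hedged sketch of that wiring follows; the layer split, the `top_k` value, and the memory module's call signature are hypothetical, not the published EaE/FaE architecture.

```python
# Illustrative sketch of interleaving an entity/fact memory inside a transformer stack.
import torch.nn as nn


class MemoryInterleavedEncoder(nn.Module):
    def __init__(self, lower: nn.Module, memory: nn.Module, upper: nn.Module, top_k: int = 100):
        super().__init__()
        self.lower = lower    # e.g. the first few transformer layers
        self.memory = memory  # entity/fact memory; assumed to accept (hidden, spans, top_k)
        self.upper = upper    # the remaining transformer layers
        self.top_k = top_k

    def forward(self, hidden, span_starts, span_ends):
        hidden = self.lower(hidden)
        # Only the top-k memory slots are read here, so inference touches a small
        # portion of the memory and training updates only those slots' rows.
        mem_out = self.memory(hidden, span_starts, span_ends, top_k=self.top_k)
        return self.upper(hidden + mem_out)
```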
“…Entity linking (EL) fulfils a key role in grounded language understanding: Given an ungrounded entity mention in text, the task is to identify the entity's corresponding entry in a Knowledge Base (KB). In particular, EL provides grounding for applications like Question Answering (Févry et al, 2020b) (also via Semantic Parsing (Shaw et al, 2019)) and Text Generation (Puduppully et al, 2019); it is also an essential component in knowledge base population (Shen et al, 2014). Entities have played a growing role in representation learning.…”
Section: Introduction
confidence: 99%