2020
DOI: 10.48550/arxiv.2004.07202
Preprint
Entities as Experts: Sparse Memory Access with Entity Supervision

Cited by 20 publications (37 citation statements). References 0 publications.
“…The entity having an alias with the smallest edit distance (Levenshtein, 1966) to the predicted text output is taken as the predicted entity. Entities as experts: Févry et al (2020) proposed EaE, a model which aims to integrate entity knowledge into a transformer-based language model. For temporal KGQA on CRONQUESTIONS, we assume that all grounded entity and time mention spans are marked in the question.…”
Section: Other Methods Compared
confidence: 99%
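The alias-matching step quoted above is simple enough to sketch. Below is a minimal, illustrative Python version, assuming a hypothetical alias table mapping entity IDs to surface-form lists; the entity IDs, aliases, and function names are invented for illustration and are not taken from the cited paper:

def levenshtein(a: str, b: str) -> int:
    """Dynamic-programming edit distance (Levenshtein, 1966)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def link_prediction(predicted_text: str, aliases: dict) -> str:
    """Return the entity whose closest alias best matches the model output."""
    pairs = ((entity, alias) for entity, names in aliases.items() for alias in names)
    best_entity, _ = min(pairs, key=lambda p: levenshtein(predicted_text.lower(), p[1].lower()))
    return best_entity

# Hypothetical alias table; in practice it would come from the KG (e.g. Wikidata).
aliases = {"Q937": ["Albert Einstein", "Einstein"], "Q1035": ["Charles Darwin", "Darwin"]}
print(link_prediction("albert einstien", aliases))  # -> "Q937"

Ties and scaling to millions of aliases would need more care (length pruning, indexing), but the selection rule the quote describes is exactly the argmin above.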
“…On this new dataset, we apply approaches based on deep language models (LM) alone, such as T5 (Raffel et al, 2020), BERT (Devlin et al, 2019), and KnowBERT (Peters et al, 2019), and also hybrid LM+KG embedding approaches, such as Entities-as-Experts (Févry et al, 2020) and EmbedKGQA (Saxena et al, 2020). We find that these baselines are not suited to temporal reasoning.…”
Section: Introduction
confidence: 99%
“…KBQA A number of early approaches in ODQA focused on using structured KBs (Berant et al, 2013) such as Freebase (Bollacker et al, 2007), with recent examples from Févry et al (2020) and Verga et al (2020). This approach often has high precision but suffers when the KB doesn't match user requirements, or where the schema limits what knowledge can be stored.…”
Section: Related Work
confidence: 99%
“…Use of knowledge-augmented neural networks had been explored in the pre-Transformer era as well (Weston et al, 2014; Sukhbaatar et al, 2015). More recently, in the context of Transformers, Févry et al (2020) utilized an explicit key-value memory to store entity representations, which are trained along with the rest of the model in an end-to-end manner. Verga et al (2020) build on Févry et al (2020), and introduced the Facts as Experts (FaE) model with explicit symbolic memory of (subject, relation, object) triples based on end-to-end trained entity representations.…”
Section: Related Work
confidence: 99%
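As a rough illustration of the key-value entity memory this snippet describes, here is a minimal PyTorch sketch: mention representations query a learned table of entity keys, only the top-k entries are read (the sparse access in the paper's title), and the mixed values are returned for reinsertion into the transformer. All shapes, names, and the key/value split are assumptions made for illustration, not the authors' implementation:

import torch
import torch.nn as nn

class EntityMemory(nn.Module):
    """Illustrative key-value memory over entity embeddings, trained end-to-end."""
    def __init__(self, num_entities: int, d_model: int, k: int = 100):
        super().__init__()
        self.keys = nn.Embedding(num_entities, d_model)    # queried by mention spans
        self.values = nn.Embedding(num_entities, d_model)  # retrieved knowledge
        self.k = k

    def forward(self, mention_repr: torch.Tensor) -> torch.Tensor:
        # mention_repr: (batch, d_model), pooled from a detected mention span
        scores = mention_repr @ self.keys.weight.T         # (batch, num_entities)
        top_scores, top_ids = scores.topk(self.k, dim=-1)  # sparse: read only k rows
        weights = top_scores.softmax(dim=-1)               # (batch, k)
        retrieved = self.values(top_ids)                   # (batch, k, d_model)
        return (weights.unsqueeze(-1) * retrieved).sum(dim=1)  # (batch, d_model)

memory = EntityMemory(num_entities=100_000, d_model=256)
out = memory(torch.randn(8, 256))  # added back into the transformer's hidden states

Because gradients flow through the retrieved values, the entity table is trained jointly with the rest of the model, which is the "end-to-end" property the quote emphasizes.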
“…More recently, in the context of Transformers, Févry et al (2020) utilized an explicit key-value memory to store entity representations, which are trained along with the rest of the model in an end-to-end manner. Verga et al (2020) build on Févry et al (2020), and introduced the Facts as Experts (FaE) model with explicit symbolic memory of (subject, relation, object) triples based on end-to-end trained entity representations. Notably, one of the motivations behind FaE is the ease of updating knowledge by directly modifying the content of the explicit symbolic memory.…”
Section: Related Work
confidence: 99%
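The update property mentioned here can be made concrete with a toy sketch: when facts live in an explicit (subject, relation, object) store keyed by (subject, relation), knowledge is changed by editing an entry rather than by retraining. The class and its interface below are invented for illustration and are not the FaE API:

from collections import defaultdict

class TripleMemory:
    """Toy symbolic store of (subject, relation, object) facts."""
    def __init__(self):
        self._facts = defaultdict(set)  # (subject, relation) -> {objects}

    def add(self, subj, rel, obj):
        self._facts[(subj, rel)].add(obj)

    def update(self, subj, rel, obj):
        self._facts[(subj, rel)] = {obj}  # direct edit: the updatability FaE targets

    def query(self, subj, rel):
        return self._facts.get((subj, rel), set())

mem = TripleMemory()
mem.add("United Kingdom", "head_of_government", "Theresa May")
mem.update("United Kingdom", "head_of_government", "Boris Johnson")  # edit, no retraining
print(mem.query("United Kingdom", "head_of_government"))  # {'Boris Johnson'}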